00:00:00.001 Started by upstream project "autotest-per-patch" build number 126181
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.018 The recommended git tool is: git
00:00:00.018 using credential 00000000-0000-0000-0000-000000000002
00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.036 Fetching changes from the remote Git repository
00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.054 Using shallow fetch with depth 1
00:00:00.054 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.054 > git --version # timeout=10
00:00:00.064 > git --version # 'git version 2.39.2'
00:00:00.064 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.075 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.075 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.230 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.241 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.254 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:03.254 > git config core.sparsecheckout # timeout=10
00:00:03.267 > git read-tree -mu HEAD # timeout=10
00:00:03.285 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:03.304 Commit message: "inventory: add WCP3 to free inventory"
00:00:03.305 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:03.444 [Pipeline] Start of Pipeline
00:00:03.459 [Pipeline] library
00:00:03.461 Loading library shm_lib@master
00:00:03.461 Library shm_lib@master is cached. Copying from home.
00:00:03.476 [Pipeline] node
00:00:03.484 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:03.486 [Pipeline] {
00:00:03.494 [Pipeline] catchError
00:00:03.496 [Pipeline] {
00:00:03.506 [Pipeline] wrap
00:00:03.513 [Pipeline] {
00:00:03.519 [Pipeline] stage
00:00:03.521 [Pipeline] { (Prologue)
00:00:03.537 [Pipeline] echo
00:00:03.538 Node: VM-host-SM9
00:00:03.542 [Pipeline] cleanWs
00:00:03.548 [WS-CLEANUP] Deleting project workspace...
00:00:03.548 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.553 [WS-CLEANUP] done
00:00:03.718 [Pipeline] setCustomBuildProperty
00:00:03.792 [Pipeline] httpRequest
00:00:03.817 [Pipeline] echo
00:00:03.818 Sorcerer 10.211.164.101 is alive
00:00:03.826 [Pipeline] httpRequest
00:00:03.829 HttpMethod: GET
00:00:03.830 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:03.830 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:03.831 Response Code: HTTP/1.1 200 OK
00:00:03.831 Success: Status code 200 is in the accepted range: 200,404
00:00:03.832 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:04.348 [Pipeline] sh
00:00:04.625 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:04.641 [Pipeline] httpRequest
00:00:04.658 [Pipeline] echo
00:00:04.659 Sorcerer 10.211.164.101 is alive
00:00:04.666 [Pipeline] httpRequest
00:00:04.669 HttpMethod: GET
00:00:04.669 URL: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz
00:00:04.669 Sending request to url: http://10.211.164.101/packages/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz
00:00:04.670 Response Code: HTTP/1.1 200 OK
00:00:04.671 Success: Status code 200 is in the accepted range: 200,404
00:00:04.671 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz
00:00:22.859 [Pipeline] sh
00:00:23.132 + tar --no-same-owner -xf spdk_e7cce062d7bcec53f8a0237bb456695749792008.tar.gz
00:00:26.449 [Pipeline] sh
00:00:26.726 + git -C spdk log --oneline -n5
00:00:26.726 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:00:26.726 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS
00:00:26.726 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts
00:00:26.726 719d03c6a sock/uring: only register net impl if supported
00:00:26.726 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:26.745 [Pipeline] writeFile
00:00:26.761 [Pipeline] sh
00:00:27.040 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:27.051 [Pipeline] sh
00:00:27.329 + cat autorun-spdk.conf
00:00:27.329 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:27.329 SPDK_TEST_NVMF=1
00:00:27.329 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:27.329 SPDK_TEST_USDT=1
00:00:27.329 SPDK_TEST_NVMF_MDNS=1
00:00:27.329 SPDK_RUN_UBSAN=1
00:00:27.329 NET_TYPE=virt
00:00:27.329 SPDK_JSONRPC_GO_CLIENT=1
00:00:27.329 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:27.336 RUN_NIGHTLY=0
00:00:27.338 [Pipeline] }
00:00:27.355 [Pipeline] // stage
00:00:27.371 [Pipeline] stage
00:00:27.374 [Pipeline] { (Run VM)
00:00:27.388 [Pipeline] sh
00:00:27.666 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:27.666 + echo 'Start stage prepare_nvme.sh'
00:00:27.666 Start stage prepare_nvme.sh
00:00:27.666 + [[ -n 0 ]]
00:00:27.666 + disk_prefix=ex0
00:00:27.666 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:00:27.666 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:00:27.666 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:00:27.666 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:27.666 ++ SPDK_TEST_NVMF=1
00:00:27.666 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:27.666 ++ SPDK_TEST_USDT=1
00:00:27.666 ++ SPDK_TEST_NVMF_MDNS=1
00:00:27.666 ++ SPDK_RUN_UBSAN=1
00:00:27.666 ++ NET_TYPE=virt
00:00:27.666 ++ SPDK_JSONRPC_GO_CLIENT=1
00:00:27.666 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:27.666 ++ RUN_NIGHTLY=0 00:00:27.666 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:27.666 + nvme_files=() 00:00:27.666 + declare -A nvme_files 00:00:27.666 + backend_dir=/var/lib/libvirt/images/backends 00:00:27.666 + nvme_files['nvme.img']=5G 00:00:27.666 + nvme_files['nvme-cmb.img']=5G 00:00:27.666 + nvme_files['nvme-multi0.img']=4G 00:00:27.666 + nvme_files['nvme-multi1.img']=4G 00:00:27.666 + nvme_files['nvme-multi2.img']=4G 00:00:27.666 + nvme_files['nvme-openstack.img']=8G 00:00:27.666 + nvme_files['nvme-zns.img']=5G 00:00:27.666 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:27.666 + (( SPDK_TEST_FTL == 1 )) 00:00:27.666 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:27.666 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:27.666 + for nvme in "${!nvme_files[@]}" 00:00:27.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:27.666 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:27.666 + for nvme in "${!nvme_files[@]}" 00:00:27.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:27.666 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:27.666 + for nvme in "${!nvme_files[@]}" 00:00:27.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:27.666 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:27.666 + for nvme in "${!nvme_files[@]}" 00:00:27.666 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:27.924 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:27.924 + for nvme in "${!nvme_files[@]}" 00:00:27.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:27.924 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:27.924 + for nvme in "${!nvme_files[@]}" 00:00:27.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:28.585 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:28.585 + for nvme in "${!nvme_files[@]}" 00:00:28.585 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:28.845 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:28.845 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:28.845 + echo 'End stage prepare_nvme.sh' 00:00:28.845 End stage prepare_nvme.sh 00:00:28.857 [Pipeline] sh 00:00:29.137 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:29.137 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:00:29.137 00:00:29.137 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:29.137 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:29.137 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:29.137 HELP=0 00:00:29.137 DRY_RUN=0 00:00:29.137 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:00:29.137 NVME_DISKS_TYPE=nvme,nvme, 00:00:29.137 NVME_AUTO_CREATE=0 00:00:29.137 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:00:29.137 NVME_CMB=,, 00:00:29.137 NVME_PMR=,, 00:00:29.137 NVME_ZNS=,, 00:00:29.137 NVME_MS=,, 00:00:29.137 NVME_FDP=,, 00:00:29.137 SPDK_VAGRANT_DISTRO=fedora38 00:00:29.137 SPDK_VAGRANT_VMCPU=10 00:00:29.137 SPDK_VAGRANT_VMRAM=12288 00:00:29.137 SPDK_VAGRANT_PROVIDER=libvirt 00:00:29.137 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:29.137 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:29.137 SPDK_OPENSTACK_NETWORK=0 00:00:29.137 VAGRANT_PACKAGE_BOX=0 00:00:29.137 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:29.137 FORCE_DISTRO=true 00:00:29.137 VAGRANT_BOX_VERSION= 00:00:29.137 EXTRA_VAGRANTFILES= 00:00:29.137 NIC_MODEL=e1000 00:00:29.137 00:00:29.137 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:29.137 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:32.423 Bringing machine 'default' up with 'libvirt' provider... 00:00:33.358 ==> default: Creating image (snapshot of base box volume). 00:00:33.358 ==> default: Creating domain with the following settings... 00:00:33.358 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721042350_9ebb730ebdc79aeaf08d 00:00:33.358 ==> default: -- Domain type: kvm 00:00:33.358 ==> default: -- Cpus: 10 00:00:33.358 ==> default: -- Feature: acpi 00:00:33.358 ==> default: -- Feature: apic 00:00:33.358 ==> default: -- Feature: pae 00:00:33.358 ==> default: -- Memory: 12288M 00:00:33.358 ==> default: -- Memory Backing: hugepages: 00:00:33.358 ==> default: -- Management MAC: 00:00:33.358 ==> default: -- Loader: 00:00:33.358 ==> default: -- Nvram: 00:00:33.358 ==> default: -- Base box: spdk/fedora38 00:00:33.358 ==> default: -- Storage pool: default 00:00:33.358 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721042350_9ebb730ebdc79aeaf08d.img (20G) 00:00:33.358 ==> default: -- Volume Cache: default 00:00:33.358 ==> default: -- Kernel: 00:00:33.358 ==> default: -- Initrd: 00:00:33.358 ==> default: -- Graphics Type: vnc 00:00:33.358 ==> default: -- Graphics Port: -1 00:00:33.358 ==> default: -- Graphics IP: 127.0.0.1 00:00:33.358 ==> default: -- Graphics Password: Not defined 00:00:33.358 ==> default: -- Video Type: cirrus 00:00:33.358 ==> default: -- Video VRAM: 9216 00:00:33.358 ==> default: -- Sound Type: 00:00:33.358 ==> default: -- Keymap: en-us 00:00:33.358 ==> default: -- TPM Path: 00:00:33.358 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:33.358 ==> default: -- Command line args: 00:00:33.358 ==> default: -> value=-device, 00:00:33.358 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:33.358 ==> default: -> value=-drive, 00:00:33.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:33.358 ==> 
default: -> value=-device, 00:00:33.358 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.358 ==> default: -> value=-device, 00:00:33.358 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:33.358 ==> default: -> value=-drive, 00:00:33.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:33.358 ==> default: -> value=-device, 00:00:33.358 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.358 ==> default: -> value=-drive, 00:00:33.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:33.358 ==> default: -> value=-device, 00:00:33.358 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.358 ==> default: -> value=-drive, 00:00:33.358 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:33.358 ==> default: -> value=-device, 00:00:33.358 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:33.358 ==> default: Creating shared folders metadata... 00:00:33.651 ==> default: Starting domain. 00:00:35.047 ==> default: Waiting for domain to get an IP address... 00:00:53.125 ==> default: Waiting for SSH to become available... 00:00:54.498 ==> default: Configuring and enabling network interfaces... 00:00:58.681 default: SSH address: 192.168.121.81:22 00:00:58.681 default: SSH username: vagrant 00:00:58.681 default: SSH auth method: private key 00:01:00.580 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:08.692 ==> default: Mounting SSHFS shared folder... 00:01:09.286 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:09.286 ==> default: Checking Mount.. 00:01:10.686 ==> default: Folder Successfully Mounted! 00:01:10.686 ==> default: Running provisioner: file... 00:01:11.251 default: ~/.gitconfig => .gitconfig 00:01:11.815 00:01:11.815 SUCCESS! 00:01:11.815 00:01:11.815 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:11.815 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.815 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:11.815 00:01:11.825 [Pipeline] } 00:01:11.843 [Pipeline] // stage 00:01:11.854 [Pipeline] dir 00:01:11.855 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:11.856 [Pipeline] { 00:01:11.872 [Pipeline] catchError 00:01:11.874 [Pipeline] { 00:01:11.889 [Pipeline] sh 00:01:12.164 + vagrant ssh-config --host vagrant 00:01:12.164 + sed -ne /^Host/,$p 00:01:12.164 + tee ssh_conf 00:01:16.385 Host vagrant 00:01:16.385 HostName 192.168.121.81 00:01:16.385 User vagrant 00:01:16.385 Port 22 00:01:16.385 UserKnownHostsFile /dev/null 00:01:16.385 StrictHostKeyChecking no 00:01:16.385 PasswordAuthentication no 00:01:16.385 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:16.385 IdentitiesOnly yes 00:01:16.385 LogLevel FATAL 00:01:16.385 ForwardAgent yes 00:01:16.385 ForwardX11 yes 00:01:16.385 00:01:16.398 [Pipeline] withEnv 00:01:16.400 [Pipeline] { 00:01:16.415 [Pipeline] sh 00:01:16.692 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:16.692 source /etc/os-release 00:01:16.692 [[ -e /image.version ]] && img=$(< /image.version) 00:01:16.692 # Minimal, systemd-like check. 00:01:16.692 if [[ -e /.dockerenv ]]; then 00:01:16.692 # Clear garbage from the node's name: 00:01:16.692 # agt-er_autotest_547-896 -> autotest_547-896 00:01:16.692 # $HOSTNAME is the actual container id 00:01:16.692 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:16.692 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:16.692 # We can assume this is a mount from a host where container is running, 00:01:16.692 # so fetch its hostname to easily identify the target swarm worker. 00:01:16.692 container="$(< /etc/hostname) ($agent)" 00:01:16.692 else 00:01:16.692 # Fallback 00:01:16.692 container=$agent 00:01:16.692 fi 00:01:16.692 fi 00:01:16.692 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:16.692 00:01:16.703 [Pipeline] } 00:01:16.723 [Pipeline] // withEnv 00:01:16.733 [Pipeline] setCustomBuildProperty 00:01:16.750 [Pipeline] stage 00:01:16.753 [Pipeline] { (Tests) 00:01:16.773 [Pipeline] sh 00:01:17.052 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:17.066 [Pipeline] sh 00:01:17.345 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:17.365 [Pipeline] timeout 00:01:17.365 Timeout set to expire in 40 min 00:01:17.367 [Pipeline] { 00:01:17.383 [Pipeline] sh 00:01:17.659 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:18.240 HEAD is now at e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:18.253 [Pipeline] sh 00:01:18.529 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:18.545 [Pipeline] sh 00:01:18.821 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:19.095 [Pipeline] sh 00:01:19.372 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:19.373 ++ readlink -f spdk_repo 00:01:19.373 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:19.373 + [[ -n /home/vagrant/spdk_repo ]] 00:01:19.373 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:19.373 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:19.630 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:01:19.630 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:19.630 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:19.630 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:19.630 + cd /home/vagrant/spdk_repo 00:01:19.630 + source /etc/os-release 00:01:19.630 ++ NAME='Fedora Linux' 00:01:19.630 ++ VERSION='38 (Cloud Edition)' 00:01:19.630 ++ ID=fedora 00:01:19.630 ++ VERSION_ID=38 00:01:19.630 ++ VERSION_CODENAME= 00:01:19.630 ++ PLATFORM_ID=platform:f38 00:01:19.630 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:19.630 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.630 ++ LOGO=fedora-logo-icon 00:01:19.630 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:19.630 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.630 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:19.630 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.630 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.630 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.630 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:19.630 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.630 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:19.630 ++ SUPPORT_END=2024-05-14 00:01:19.630 ++ VARIANT='Cloud Edition' 00:01:19.630 ++ VARIANT_ID=cloud 00:01:19.630 + uname -a 00:01:19.630 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:19.630 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:19.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:19.889 Hugepages 00:01:19.889 node hugesize free / total 00:01:19.889 node0 1048576kB 0 / 0 00:01:19.889 node0 2048kB 0 / 0 00:01:19.889 00:01:19.889 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:19.889 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:19.889 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:20.154 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:20.154 + rm -f /tmp/spdk-ld-path 00:01:20.154 + source autorun-spdk.conf 00:01:20.154 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.154 ++ SPDK_TEST_NVMF=1 00:01:20.154 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.154 ++ SPDK_TEST_USDT=1 00:01:20.154 ++ SPDK_TEST_NVMF_MDNS=1 00:01:20.154 ++ SPDK_RUN_UBSAN=1 00:01:20.154 ++ NET_TYPE=virt 00:01:20.154 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:20.154 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.154 ++ RUN_NIGHTLY=0 00:01:20.154 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:20.154 + [[ -n '' ]] 00:01:20.154 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:20.154 + for M in /var/spdk/build-*-manifest.txt 00:01:20.154 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:20.154 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.154 + for M in /var/spdk/build-*-manifest.txt 00:01:20.154 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:20.154 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:20.154 ++ uname 00:01:20.154 + [[ Linux == \L\i\n\u\x ]] 00:01:20.154 + sudo dmesg -T 00:01:20.154 + sudo dmesg --clear 00:01:20.154 + dmesg_pid=5156 00:01:20.154 + [[ Fedora Linux == FreeBSD ]] 00:01:20.154 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.154 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:20.154 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:20.154 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:20.154 + sudo dmesg -Tw 00:01:20.154 + export FIO_BIN=/usr/src/fio-static/fio 00:01:20.154 + FIO_BIN=/usr/src/fio-static/fio 00:01:20.154 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:20.154 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:20.154 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:20.154 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.154 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:20.154 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:20.154 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.154 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:20.154 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:20.154 Test configuration: 00:01:20.154 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.154 SPDK_TEST_NVMF=1 00:01:20.154 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.154 SPDK_TEST_USDT=1 00:01:20.154 SPDK_TEST_NVMF_MDNS=1 00:01:20.154 SPDK_RUN_UBSAN=1 00:01:20.154 NET_TYPE=virt 00:01:20.154 SPDK_JSONRPC_GO_CLIENT=1 00:01:20.154 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.154 RUN_NIGHTLY=0 11:19:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:20.154 11:19:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:20.154 11:19:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:20.154 11:19:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:20.154 11:19:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.154 11:19:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.154 11:19:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.154 11:19:57 -- paths/export.sh@5 -- $ export PATH 00:01:20.154 11:19:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:20.154 11:19:57 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:20.154 11:19:57 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:20.154 11:19:57 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721042397.XXXXXX 
00:01:20.154 11:19:57 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721042397.3OsL2P 00:01:20.154 11:19:57 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:20.154 11:19:57 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:20.154 11:19:57 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:20.154 11:19:57 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:20.154 11:19:57 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:20.154 11:19:57 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:20.154 11:19:57 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:20.154 11:19:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.154 11:19:57 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:20.154 11:19:57 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:20.154 11:19:57 -- pm/common@17 -- $ local monitor 00:01:20.154 11:19:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.154 11:19:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:20.154 11:19:57 -- pm/common@25 -- $ sleep 1 00:01:20.154 11:19:57 -- pm/common@21 -- $ date +%s 00:01:20.154 11:19:57 -- pm/common@21 -- $ date +%s 00:01:20.154 11:19:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721042397 00:01:20.154 11:19:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721042397 00:01:20.412 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721042397_collect-vmstat.pm.log 00:01:20.412 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721042397_collect-cpu-load.pm.log 00:01:21.345 11:19:58 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:21.345 11:19:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:21.345 11:19:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:21.345 11:19:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:21.345 11:19:58 -- spdk/autobuild.sh@16 -- $ date -u 00:01:21.345 Mon Jul 15 11:19:58 AM UTC 2024 00:01:21.345 11:19:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:21.345 v24.09-pre-205-ge7cce062d 00:01:21.345 11:19:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:21.345 11:19:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:21.345 11:19:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:21.345 11:19:58 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:21.345 11:19:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:21.345 11:19:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:21.346 ************************************ 00:01:21.346 START TEST ubsan 00:01:21.346 ************************************ 00:01:21.346 using ubsan 00:01:21.346 11:19:58 ubsan -- 
common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:21.346 00:01:21.346 real 0m0.000s 00:01:21.346 user 0m0.000s 00:01:21.346 sys 0m0.000s 00:01:21.346 11:19:58 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:21.346 11:19:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:21.346 ************************************ 00:01:21.346 END TEST ubsan 00:01:21.346 ************************************ 00:01:21.346 11:19:58 -- common/autotest_common.sh@1142 -- $ return 0 00:01:21.346 11:19:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:21.346 11:19:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:21.346 11:19:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:21.346 11:19:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:21.346 11:19:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:21.346 11:19:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:21.346 11:19:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:21.346 11:19:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:21.346 11:19:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:21.346 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:21.346 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.912 Using 'verbs' RDMA provider 00:01:35.042 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:49.918 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:49.918 go version go1.21.1 linux/amd64 00:01:49.918 Creating mk/config.mk...done. 00:01:49.918 Creating mk/cc.flags.mk...done. 00:01:49.918 Type 'make' to build. 00:01:49.918 11:20:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:49.918 11:20:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:49.918 11:20:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:49.918 11:20:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.918 ************************************ 00:01:49.918 START TEST make 00:01:49.918 ************************************ 00:01:49.918 11:20:25 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:49.918 make[1]: Nothing to be done for 'all'. 
00:02:07.994 The Meson build system 00:02:07.994 Version: 1.3.1 00:02:07.994 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:07.994 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:07.994 Build type: native build 00:02:07.994 Program cat found: YES (/usr/bin/cat) 00:02:07.994 Project name: DPDK 00:02:07.994 Project version: 24.03.0 00:02:07.994 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:07.994 C linker for the host machine: cc ld.bfd 2.39-16 00:02:07.994 Host machine cpu family: x86_64 00:02:07.994 Host machine cpu: x86_64 00:02:07.994 Message: ## Building in Developer Mode ## 00:02:07.994 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.994 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:07.994 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.994 Program python3 found: YES (/usr/bin/python3) 00:02:07.994 Program cat found: YES (/usr/bin/cat) 00:02:07.994 Compiler for C supports arguments -march=native: YES 00:02:07.994 Checking for size of "void *" : 8 00:02:07.994 Checking for size of "void *" : 8 (cached) 00:02:07.994 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:07.994 Library m found: YES 00:02:07.994 Library numa found: YES 00:02:07.994 Has header "numaif.h" : YES 00:02:07.994 Library fdt found: NO 00:02:07.994 Library execinfo found: NO 00:02:07.994 Has header "execinfo.h" : YES 00:02:07.994 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:07.994 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.994 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.994 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.994 Run-time dependency openssl found: YES 3.0.9 00:02:07.994 Run-time dependency libpcap found: YES 1.10.4 00:02:07.994 Has header "pcap.h" with dependency libpcap: YES 00:02:07.994 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.994 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.994 Compiler for C supports arguments -Wformat: YES 00:02:07.994 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.994 Compiler for C supports arguments -Wformat-security: NO 00:02:07.994 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.994 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.994 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.994 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.994 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.994 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.994 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.994 Compiler for C supports arguments -Wundef: YES 00:02:07.994 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.994 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.994 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.994 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.994 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.994 Program objdump found: YES (/usr/bin/objdump) 00:02:07.994 Compiler for C supports arguments -mavx512f: YES 00:02:07.994 Checking if "AVX512 checking" compiles: YES 00:02:07.994 Fetching value of define "__SSE4_2__" : 1 00:02:07.994 Fetching value of define 
"__AES__" : 1 00:02:07.994 Fetching value of define "__AVX__" : 1 00:02:07.994 Fetching value of define "__AVX2__" : 1 00:02:07.994 Fetching value of define "__AVX512BW__" : (undefined) 00:02:07.994 Fetching value of define "__AVX512CD__" : (undefined) 00:02:07.994 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:07.994 Fetching value of define "__AVX512F__" : (undefined) 00:02:07.994 Fetching value of define "__AVX512VL__" : (undefined) 00:02:07.994 Fetching value of define "__PCLMUL__" : 1 00:02:07.994 Fetching value of define "__RDRND__" : 1 00:02:07.994 Fetching value of define "__RDSEED__" : 1 00:02:07.994 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.994 Fetching value of define "__znver1__" : (undefined) 00:02:07.994 Fetching value of define "__znver2__" : (undefined) 00:02:07.994 Fetching value of define "__znver3__" : (undefined) 00:02:07.994 Fetching value of define "__znver4__" : (undefined) 00:02:07.994 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.994 Message: lib/log: Defining dependency "log" 00:02:07.994 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.994 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.994 Checking for function "getentropy" : NO 00:02:07.994 Message: lib/eal: Defining dependency "eal" 00:02:07.994 Message: lib/ring: Defining dependency "ring" 00:02:07.994 Message: lib/rcu: Defining dependency "rcu" 00:02:07.994 Message: lib/mempool: Defining dependency "mempool" 00:02:07.994 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.994 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.994 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.994 Compiler for C supports arguments -mpclmul: YES 00:02:07.994 Compiler for C supports arguments -maes: YES 00:02:07.994 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.994 Compiler for C supports arguments -mavx512bw: YES 00:02:07.994 Compiler for C supports arguments -mavx512dq: YES 00:02:07.994 Compiler for C supports arguments -mavx512vl: YES 00:02:07.994 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.994 Compiler for C supports arguments -mavx2: YES 00:02:07.994 Compiler for C supports arguments -mavx: YES 00:02:07.994 Message: lib/net: Defining dependency "net" 00:02:07.994 Message: lib/meter: Defining dependency "meter" 00:02:07.994 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.994 Message: lib/pci: Defining dependency "pci" 00:02:07.994 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.994 Message: lib/hash: Defining dependency "hash" 00:02:07.994 Message: lib/timer: Defining dependency "timer" 00:02:07.994 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.994 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.994 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.994 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.994 Message: lib/power: Defining dependency "power" 00:02:07.994 Message: lib/reorder: Defining dependency "reorder" 00:02:07.994 Message: lib/security: Defining dependency "security" 00:02:07.994 Has header "linux/userfaultfd.h" : YES 00:02:07.994 Has header "linux/vduse.h" : YES 00:02:07.994 Message: lib/vhost: Defining dependency "vhost" 00:02:07.995 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.995 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.995 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.995 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.995 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.995 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.995 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.995 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.995 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.995 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.995 Program doxygen found: YES (/usr/bin/doxygen) 00:02:07.995 Configuring doxy-api-html.conf using configuration 00:02:07.995 Configuring doxy-api-man.conf using configuration 00:02:07.995 Program mandb found: YES (/usr/bin/mandb) 00:02:07.995 Program sphinx-build found: NO 00:02:07.995 Configuring rte_build_config.h using configuration 00:02:07.995 Message: 00:02:07.995 ================= 00:02:07.995 Applications Enabled 00:02:07.995 ================= 00:02:07.995 00:02:07.995 apps: 00:02:07.995 00:02:07.995 00:02:07.995 Message: 00:02:07.995 ================= 00:02:07.995 Libraries Enabled 00:02:07.995 ================= 00:02:07.995 00:02:07.995 libs: 00:02:07.995 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.995 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.995 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.995 00:02:07.995 Message: 00:02:07.995 =============== 00:02:07.995 Drivers Enabled 00:02:07.995 =============== 00:02:07.995 00:02:07.995 common: 00:02:07.995 00:02:07.995 bus: 00:02:07.995 pci, vdev, 00:02:07.995 mempool: 00:02:07.995 ring, 00:02:07.995 dma: 00:02:07.995 00:02:07.995 net: 00:02:07.995 00:02:07.995 crypto: 00:02:07.995 00:02:07.995 compress: 00:02:07.995 00:02:07.995 vdpa: 00:02:07.995 00:02:07.995 00:02:07.995 Message: 00:02:07.995 ================= 00:02:07.995 Content Skipped 00:02:07.995 ================= 00:02:07.995 00:02:07.995 apps: 00:02:07.995 dumpcap: explicitly disabled via build config 00:02:07.995 graph: explicitly disabled via build config 00:02:07.995 pdump: explicitly disabled via build config 00:02:07.995 proc-info: explicitly disabled via build config 00:02:07.995 test-acl: explicitly disabled via build config 00:02:07.995 test-bbdev: explicitly disabled via build config 00:02:07.995 test-cmdline: explicitly disabled via build config 00:02:07.995 test-compress-perf: explicitly disabled via build config 00:02:07.995 test-crypto-perf: explicitly disabled via build config 00:02:07.995 test-dma-perf: explicitly disabled via build config 00:02:07.995 test-eventdev: explicitly disabled via build config 00:02:07.995 test-fib: explicitly disabled via build config 00:02:07.995 test-flow-perf: explicitly disabled via build config 00:02:07.995 test-gpudev: explicitly disabled via build config 00:02:07.995 test-mldev: explicitly disabled via build config 00:02:07.995 test-pipeline: explicitly disabled via build config 00:02:07.995 test-pmd: explicitly disabled via build config 00:02:07.995 test-regex: explicitly disabled via build config 00:02:07.995 test-sad: explicitly disabled via build config 00:02:07.995 test-security-perf: explicitly disabled via build config 00:02:07.995 00:02:07.995 libs: 00:02:07.995 argparse: explicitly disabled via build config 00:02:07.995 metrics: explicitly disabled via build config 00:02:07.995 acl: explicitly disabled via build config 00:02:07.995 bbdev: explicitly disabled via build config 00:02:07.995 
bitratestats: explicitly disabled via build config 00:02:07.995 bpf: explicitly disabled via build config 00:02:07.995 cfgfile: explicitly disabled via build config 00:02:07.995 distributor: explicitly disabled via build config 00:02:07.995 efd: explicitly disabled via build config 00:02:07.995 eventdev: explicitly disabled via build config 00:02:07.995 dispatcher: explicitly disabled via build config 00:02:07.995 gpudev: explicitly disabled via build config 00:02:07.995 gro: explicitly disabled via build config 00:02:07.995 gso: explicitly disabled via build config 00:02:07.995 ip_frag: explicitly disabled via build config 00:02:07.995 jobstats: explicitly disabled via build config 00:02:07.995 latencystats: explicitly disabled via build config 00:02:07.995 lpm: explicitly disabled via build config 00:02:07.995 member: explicitly disabled via build config 00:02:07.995 pcapng: explicitly disabled via build config 00:02:07.995 rawdev: explicitly disabled via build config 00:02:07.995 regexdev: explicitly disabled via build config 00:02:07.995 mldev: explicitly disabled via build config 00:02:07.995 rib: explicitly disabled via build config 00:02:07.995 sched: explicitly disabled via build config 00:02:07.995 stack: explicitly disabled via build config 00:02:07.995 ipsec: explicitly disabled via build config 00:02:07.995 pdcp: explicitly disabled via build config 00:02:07.995 fib: explicitly disabled via build config 00:02:07.995 port: explicitly disabled via build config 00:02:07.995 pdump: explicitly disabled via build config 00:02:07.995 table: explicitly disabled via build config 00:02:07.995 pipeline: explicitly disabled via build config 00:02:07.995 graph: explicitly disabled via build config 00:02:07.995 node: explicitly disabled via build config 00:02:07.995 00:02:07.995 drivers: 00:02:07.995 common/cpt: not in enabled drivers build config 00:02:07.995 common/dpaax: not in enabled drivers build config 00:02:07.995 common/iavf: not in enabled drivers build config 00:02:07.995 common/idpf: not in enabled drivers build config 00:02:07.995 common/ionic: not in enabled drivers build config 00:02:07.995 common/mvep: not in enabled drivers build config 00:02:07.995 common/octeontx: not in enabled drivers build config 00:02:07.995 bus/auxiliary: not in enabled drivers build config 00:02:07.995 bus/cdx: not in enabled drivers build config 00:02:07.995 bus/dpaa: not in enabled drivers build config 00:02:07.995 bus/fslmc: not in enabled drivers build config 00:02:07.995 bus/ifpga: not in enabled drivers build config 00:02:07.995 bus/platform: not in enabled drivers build config 00:02:07.995 bus/uacce: not in enabled drivers build config 00:02:07.995 bus/vmbus: not in enabled drivers build config 00:02:07.995 common/cnxk: not in enabled drivers build config 00:02:07.995 common/mlx5: not in enabled drivers build config 00:02:07.995 common/nfp: not in enabled drivers build config 00:02:07.995 common/nitrox: not in enabled drivers build config 00:02:07.995 common/qat: not in enabled drivers build config 00:02:07.995 common/sfc_efx: not in enabled drivers build config 00:02:07.995 mempool/bucket: not in enabled drivers build config 00:02:07.995 mempool/cnxk: not in enabled drivers build config 00:02:07.995 mempool/dpaa: not in enabled drivers build config 00:02:07.995 mempool/dpaa2: not in enabled drivers build config 00:02:07.995 mempool/octeontx: not in enabled drivers build config 00:02:07.995 mempool/stack: not in enabled drivers build config 00:02:07.995 dma/cnxk: not in enabled drivers build 
config 00:02:07.995 dma/dpaa: not in enabled drivers build config 00:02:07.995 dma/dpaa2: not in enabled drivers build config 00:02:07.995 dma/hisilicon: not in enabled drivers build config 00:02:07.995 dma/idxd: not in enabled drivers build config 00:02:07.995 dma/ioat: not in enabled drivers build config 00:02:07.995 dma/skeleton: not in enabled drivers build config 00:02:07.995 net/af_packet: not in enabled drivers build config 00:02:07.995 net/af_xdp: not in enabled drivers build config 00:02:07.995 net/ark: not in enabled drivers build config 00:02:07.995 net/atlantic: not in enabled drivers build config 00:02:07.995 net/avp: not in enabled drivers build config 00:02:07.995 net/axgbe: not in enabled drivers build config 00:02:07.995 net/bnx2x: not in enabled drivers build config 00:02:07.995 net/bnxt: not in enabled drivers build config 00:02:07.995 net/bonding: not in enabled drivers build config 00:02:07.995 net/cnxk: not in enabled drivers build config 00:02:07.995 net/cpfl: not in enabled drivers build config 00:02:07.995 net/cxgbe: not in enabled drivers build config 00:02:07.995 net/dpaa: not in enabled drivers build config 00:02:07.995 net/dpaa2: not in enabled drivers build config 00:02:07.995 net/e1000: not in enabled drivers build config 00:02:07.995 net/ena: not in enabled drivers build config 00:02:07.995 net/enetc: not in enabled drivers build config 00:02:07.995 net/enetfec: not in enabled drivers build config 00:02:07.995 net/enic: not in enabled drivers build config 00:02:07.995 net/failsafe: not in enabled drivers build config 00:02:07.995 net/fm10k: not in enabled drivers build config 00:02:07.995 net/gve: not in enabled drivers build config 00:02:07.995 net/hinic: not in enabled drivers build config 00:02:07.995 net/hns3: not in enabled drivers build config 00:02:07.995 net/i40e: not in enabled drivers build config 00:02:07.995 net/iavf: not in enabled drivers build config 00:02:07.995 net/ice: not in enabled drivers build config 00:02:07.995 net/idpf: not in enabled drivers build config 00:02:07.995 net/igc: not in enabled drivers build config 00:02:07.996 net/ionic: not in enabled drivers build config 00:02:07.996 net/ipn3ke: not in enabled drivers build config 00:02:07.996 net/ixgbe: not in enabled drivers build config 00:02:07.996 net/mana: not in enabled drivers build config 00:02:07.996 net/memif: not in enabled drivers build config 00:02:07.996 net/mlx4: not in enabled drivers build config 00:02:07.996 net/mlx5: not in enabled drivers build config 00:02:07.996 net/mvneta: not in enabled drivers build config 00:02:07.996 net/mvpp2: not in enabled drivers build config 00:02:07.996 net/netvsc: not in enabled drivers build config 00:02:07.996 net/nfb: not in enabled drivers build config 00:02:07.996 net/nfp: not in enabled drivers build config 00:02:07.996 net/ngbe: not in enabled drivers build config 00:02:07.996 net/null: not in enabled drivers build config 00:02:07.996 net/octeontx: not in enabled drivers build config 00:02:07.996 net/octeon_ep: not in enabled drivers build config 00:02:07.996 net/pcap: not in enabled drivers build config 00:02:07.996 net/pfe: not in enabled drivers build config 00:02:07.996 net/qede: not in enabled drivers build config 00:02:07.996 net/ring: not in enabled drivers build config 00:02:07.996 net/sfc: not in enabled drivers build config 00:02:07.996 net/softnic: not in enabled drivers build config 00:02:07.996 net/tap: not in enabled drivers build config 00:02:07.996 net/thunderx: not in enabled drivers build config 00:02:07.996 
net/txgbe: not in enabled drivers build config 00:02:07.996 net/vdev_netvsc: not in enabled drivers build config 00:02:07.996 net/vhost: not in enabled drivers build config 00:02:07.996 net/virtio: not in enabled drivers build config 00:02:07.996 net/vmxnet3: not in enabled drivers build config 00:02:07.996 raw/*: missing internal dependency, "rawdev" 00:02:07.996 crypto/armv8: not in enabled drivers build config 00:02:07.996 crypto/bcmfs: not in enabled drivers build config 00:02:07.996 crypto/caam_jr: not in enabled drivers build config 00:02:07.996 crypto/ccp: not in enabled drivers build config 00:02:07.996 crypto/cnxk: not in enabled drivers build config 00:02:07.996 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.996 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.996 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.996 crypto/mlx5: not in enabled drivers build config 00:02:07.996 crypto/mvsam: not in enabled drivers build config 00:02:07.996 crypto/nitrox: not in enabled drivers build config 00:02:07.996 crypto/null: not in enabled drivers build config 00:02:07.996 crypto/octeontx: not in enabled drivers build config 00:02:07.996 crypto/openssl: not in enabled drivers build config 00:02:07.996 crypto/scheduler: not in enabled drivers build config 00:02:07.996 crypto/uadk: not in enabled drivers build config 00:02:07.996 crypto/virtio: not in enabled drivers build config 00:02:07.996 compress/isal: not in enabled drivers build config 00:02:07.996 compress/mlx5: not in enabled drivers build config 00:02:07.996 compress/nitrox: not in enabled drivers build config 00:02:07.996 compress/octeontx: not in enabled drivers build config 00:02:07.996 compress/zlib: not in enabled drivers build config 00:02:07.996 regex/*: missing internal dependency, "regexdev" 00:02:07.996 ml/*: missing internal dependency, "mldev" 00:02:07.996 vdpa/ifc: not in enabled drivers build config 00:02:07.996 vdpa/mlx5: not in enabled drivers build config 00:02:07.996 vdpa/nfp: not in enabled drivers build config 00:02:07.996 vdpa/sfc: not in enabled drivers build config 00:02:07.996 event/*: missing internal dependency, "eventdev" 00:02:07.996 baseband/*: missing internal dependency, "bbdev" 00:02:07.996 gpu/*: missing internal dependency, "gpudev" 00:02:07.996 00:02:07.996 00:02:07.996 Build targets in project: 85 00:02:07.996 00:02:07.996 DPDK 24.03.0 00:02:07.996 00:02:07.996 User defined options 00:02:07.996 buildtype : debug 00:02:07.996 default_library : shared 00:02:07.996 libdir : lib 00:02:07.996 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.996 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.996 c_link_args : 00:02:07.996 cpu_instruction_set: native 00:02:07.996 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.996 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.996 enable_docs : false 00:02:07.996 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:07.996 enable_kmods : false 00:02:07.996 max_lcores : 128 00:02:07.996 tests : false 00:02:07.996 00:02:07.996 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.996 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.996 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.996 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.996 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.996 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.996 [5/268] Linking static target lib/librte_log.a 00:02:07.996 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.996 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.996 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.996 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.996 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.996 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.996 [12/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.996 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.996 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.996 [15/268] Linking static target lib/librte_telemetry.a 00:02:07.996 [16/268] Linking target lib/librte_log.so.24.1 00:02:07.996 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.996 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.996 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.255 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.255 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.255 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.513 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.513 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.513 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.513 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.770 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.770 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.060 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.060 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.319 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.319 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:09.319 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.319 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.319 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:09.577 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.577 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.836 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.836 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.836 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.836 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.836 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.095 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.095 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.095 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.352 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.610 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.949 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.949 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:10.949 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.949 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.230 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.230 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.230 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.792 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.792 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.792 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.048 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.048 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.048 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.048 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.305 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.305 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:12.305 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:12.305 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.870 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.870 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.126 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.126 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.126 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.383 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.641 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.641 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.641 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.641 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.641 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.899 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.899 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.156 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.414 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.414 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.414 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.672 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.672 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.672 [85/268] Linking static target lib/librte_ring.a 00:02:14.930 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.930 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.188 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.188 [89/268] Linking static target lib/librte_eal.a 00:02:15.446 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.446 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.446 [92/268] Linking static target lib/librte_rcu.a 00:02:15.446 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:15.716 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:15.716 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.716 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.716 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.995 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:15.995 [99/268] Linking static target lib/librte_mempool.a 00:02:15.995 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.995 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.252 [102/268] Linking static target lib/librte_mbuf.a 00:02:16.252 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.510 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.075 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.075 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.075 [107/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.333 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:17.333 [109/268] Linking static target lib/librte_meter.a 00:02:17.333 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.333 [111/268] Linking static target lib/librte_net.a 00:02:17.333 [112/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.590 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.848 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.848 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.107 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.107 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.673 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.673 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.673 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 
00:02:19.606 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.606 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.864 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.864 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.864 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.121 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.122 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.122 [128/268] Linking static target lib/librte_pci.a 00:02:20.122 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.122 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.122 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.380 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.380 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.380 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.380 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.638 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.638 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:20.638 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.638 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.638 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.638 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.638 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.638 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.897 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.897 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:21.159 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.424 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.424 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.424 [149/268] Linking static target lib/librte_ethdev.a 00:02:21.682 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.940 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.940 [152/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.940 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.940 [154/268] Linking static target lib/librte_cmdline.a 00:02:22.198 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.198 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.198 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.198 [158/268] Linking static target lib/librte_timer.a 00:02:22.456 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.456 [160/268] Linking static target lib/librte_hash.a 00:02:23.022 [161/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.022 [162/268] Linking static target lib/librte_compressdev.a 00:02:23.022 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:23.022 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.022 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.022 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:23.022 [167/268] Linking static target lib/librte_dmadev.a 00:02:23.022 [168/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.280 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.846 [170/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.104 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.104 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.104 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.104 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.104 [175/268] Linking static target lib/librte_cryptodev.a 00:02:24.104 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.362 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.362 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.363 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.294 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.294 [181/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.294 [182/268] Linking static target lib/librte_reorder.a 00:02:25.294 [183/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.294 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.294 [185/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.294 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.868 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.868 [188/268] Linking static target lib/librte_power.a 00:02:25.868 [189/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.124 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:26.124 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.124 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.124 [193/268] Linking static target lib/librte_security.a 00:02:26.380 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.636 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.203 [196/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.203 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.470 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.470 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.470 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.470 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:27.727 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.984 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.984 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.242 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.499 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.499 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:28.499 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.499 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:28.757 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:28.757 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:28.757 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.757 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:28.757 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:28.757 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.757 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.757 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.757 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:29.014 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:29.014 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.014 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.014 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:29.014 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.014 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.014 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.014 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.273 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.531 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.097 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.097 [230/268] Linking target lib/librte_eal.so.24.1 00:02:30.355 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.355 [232/268] Linking target lib/librte_ring.so.24.1 00:02:30.355 [233/268] Linking target lib/librte_timer.so.24.1 00:02:30.355 [234/268] Linking target lib/librte_meter.so.24.1 00:02:30.355 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:30.355 [236/268] Linking target lib/librte_pci.so.24.1 00:02:30.355 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.355 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:30.613 [239/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:30.613 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:30.613 [241/268] Linking target lib/librte_mempool.so.24.1 00:02:30.613 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:30.613 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:30.613 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:30.613 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:30.613 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:30.613 [247/268] Linking target lib/librte_mbuf.so.24.1 00:02:30.613 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:30.872 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:30.872 [250/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.872 [251/268] Linking static target lib/librte_vhost.a 00:02:30.872 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:30.872 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:30.872 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:30.872 [255/268] Linking target lib/librte_net.so.24.1 00:02:30.872 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.130 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.130 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.130 [259/268] Linking target lib/librte_hash.so.24.1 00:02:31.130 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:31.130 [261/268] Linking target lib/librte_security.so.24.1 00:02:31.387 [262/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.387 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:31.387 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:31.387 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:31.645 [266/268] Linking target lib/librte_power.so.24.1 00:02:32.211 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.211 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.469 INFO: autodetecting backend as ninja 00:02:32.469 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:33.842 CC lib/ut/ut.o 00:02:33.842 CC lib/ut_mock/mock.o 00:02:33.842 CC lib/log/log.o 00:02:33.842 CC lib/log/log_deprecated.o 00:02:33.842 CC lib/log/log_flags.o 00:02:33.842 LIB libspdk_ut.a 00:02:33.842 SO libspdk_ut.so.2.0 00:02:33.842 LIB libspdk_log.a 00:02:33.842 LIB libspdk_ut_mock.a 00:02:33.842 SYMLINK libspdk_ut.so 00:02:33.842 SO libspdk_ut_mock.so.6.0 00:02:33.842 SO libspdk_log.so.7.0 00:02:33.842 SYMLINK libspdk_ut_mock.so 00:02:33.843 SYMLINK libspdk_log.so 00:02:34.100 CC lib/dma/dma.o 00:02:34.100 CC lib/ioat/ioat.o 00:02:34.100 CXX lib/trace_parser/trace.o 00:02:34.100 CC lib/util/base64.o 00:02:34.100 CC lib/util/bit_array.o 00:02:34.100 CC lib/util/cpuset.o 00:02:34.100 CC lib/util/crc32.o 00:02:34.100 CC lib/util/crc16.o 00:02:34.100 CC lib/util/crc32c.o 00:02:34.358 CC lib/util/crc32_ieee.o 00:02:34.358 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.358 CC lib/util/crc64.o 00:02:34.358 CC lib/util/dif.o 
00:02:34.358 CC lib/util/fd.o 00:02:34.358 CC lib/util/file.o 00:02:34.358 LIB libspdk_dma.a 00:02:34.358 CC lib/util/hexlify.o 00:02:34.358 CC lib/util/iov.o 00:02:34.615 SO libspdk_dma.so.4.0 00:02:34.615 LIB libspdk_ioat.a 00:02:34.615 SO libspdk_ioat.so.7.0 00:02:34.615 CC lib/vfio_user/host/vfio_user.o 00:02:34.615 CC lib/util/math.o 00:02:34.615 SYMLINK libspdk_dma.so 00:02:34.615 CC lib/util/pipe.o 00:02:34.615 SYMLINK libspdk_ioat.so 00:02:34.615 CC lib/util/strerror_tls.o 00:02:34.615 CC lib/util/string.o 00:02:34.615 CC lib/util/uuid.o 00:02:34.615 CC lib/util/fd_group.o 00:02:34.873 CC lib/util/xor.o 00:02:34.873 CC lib/util/zipf.o 00:02:34.873 LIB libspdk_vfio_user.a 00:02:34.873 SO libspdk_vfio_user.so.5.0 00:02:35.130 SYMLINK libspdk_vfio_user.so 00:02:35.130 LIB libspdk_util.a 00:02:35.130 SO libspdk_util.so.9.1 00:02:35.388 SYMLINK libspdk_util.so 00:02:35.645 LIB libspdk_trace_parser.a 00:02:35.645 SO libspdk_trace_parser.so.5.0 00:02:35.645 CC lib/conf/conf.o 00:02:35.645 CC lib/idxd/idxd.o 00:02:35.645 CC lib/idxd/idxd_user.o 00:02:35.645 CC lib/env_dpdk/env.o 00:02:35.645 CC lib/idxd/idxd_kernel.o 00:02:35.645 CC lib/vmd/vmd.o 00:02:35.645 CC lib/json/json_parse.o 00:02:35.645 CC lib/rdma_provider/common.o 00:02:35.645 CC lib/rdma_utils/rdma_utils.o 00:02:35.645 SYMLINK libspdk_trace_parser.so 00:02:35.645 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.902 CC lib/json/json_util.o 00:02:35.902 CC lib/vmd/led.o 00:02:35.902 CC lib/env_dpdk/memory.o 00:02:35.902 CC lib/json/json_write.o 00:02:35.902 LIB libspdk_rdma_provider.a 00:02:35.902 SO libspdk_rdma_provider.so.6.0 00:02:36.159 LIB libspdk_conf.a 00:02:36.159 SYMLINK libspdk_rdma_provider.so 00:02:36.159 SO libspdk_conf.so.6.0 00:02:36.159 CC lib/env_dpdk/pci.o 00:02:36.159 CC lib/env_dpdk/init.o 00:02:36.159 LIB libspdk_rdma_utils.a 00:02:36.159 CC lib/env_dpdk/threads.o 00:02:36.159 SO libspdk_rdma_utils.so.1.0 00:02:36.159 SYMLINK libspdk_conf.so 00:02:36.159 CC lib/env_dpdk/pci_ioat.o 00:02:36.159 SYMLINK libspdk_rdma_utils.so 00:02:36.159 CC lib/env_dpdk/pci_virtio.o 00:02:36.416 CC lib/env_dpdk/pci_vmd.o 00:02:36.416 LIB libspdk_json.a 00:02:36.416 SO libspdk_json.so.6.0 00:02:36.416 CC lib/env_dpdk/pci_idxd.o 00:02:36.416 CC lib/env_dpdk/pci_event.o 00:02:36.416 SYMLINK libspdk_json.so 00:02:36.416 CC lib/env_dpdk/sigbus_handler.o 00:02:36.416 CC lib/env_dpdk/pci_dpdk.o 00:02:36.416 LIB libspdk_idxd.a 00:02:36.416 SO libspdk_idxd.so.12.0 00:02:36.673 LIB libspdk_vmd.a 00:02:36.673 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.673 SO libspdk_vmd.so.6.0 00:02:36.673 SYMLINK libspdk_idxd.so 00:02:36.673 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.673 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.673 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.673 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.673 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.673 SYMLINK libspdk_vmd.so 00:02:36.931 LIB libspdk_jsonrpc.a 00:02:37.188 SO libspdk_jsonrpc.so.6.0 00:02:37.188 SYMLINK libspdk_jsonrpc.so 00:02:37.446 CC lib/rpc/rpc.o 00:02:37.703 LIB libspdk_rpc.a 00:02:37.703 SO libspdk_rpc.so.6.0 00:02:37.703 LIB libspdk_env_dpdk.a 00:02:37.703 SYMLINK libspdk_rpc.so 00:02:37.703 SO libspdk_env_dpdk.so.14.1 00:02:37.960 CC lib/keyring/keyring.o 00:02:37.960 CC lib/keyring/keyring_rpc.o 00:02:37.960 CC lib/notify/notify.o 00:02:37.960 CC lib/notify/notify_rpc.o 00:02:37.960 CC lib/trace/trace_flags.o 00:02:37.960 CC lib/trace/trace.o 00:02:37.960 CC lib/trace/trace_rpc.o 00:02:37.960 SYMLINK libspdk_env_dpdk.so 00:02:38.218 LIB libspdk_notify.a 
00:02:38.218 SO libspdk_notify.so.6.0 00:02:38.218 LIB libspdk_trace.a 00:02:38.218 SO libspdk_trace.so.10.0 00:02:38.218 SYMLINK libspdk_notify.so 00:02:38.218 LIB libspdk_keyring.a 00:02:38.218 SO libspdk_keyring.so.1.0 00:02:38.476 SYMLINK libspdk_trace.so 00:02:38.476 SYMLINK libspdk_keyring.so 00:02:38.476 CC lib/sock/sock.o 00:02:38.476 CC lib/sock/sock_rpc.o 00:02:38.476 CC lib/thread/iobuf.o 00:02:38.476 CC lib/thread/thread.o 00:02:39.051 LIB libspdk_sock.a 00:02:39.051 SO libspdk_sock.so.10.0 00:02:39.051 SYMLINK libspdk_sock.so 00:02:39.308 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.308 CC lib/nvme/nvme_ctrlr.o 00:02:39.308 CC lib/nvme/nvme_fabric.o 00:02:39.308 CC lib/nvme/nvme_ns_cmd.o 00:02:39.308 CC lib/nvme/nvme_ns.o 00:02:39.308 CC lib/nvme/nvme_pcie_common.o 00:02:39.308 CC lib/nvme/nvme_pcie.o 00:02:39.309 CC lib/nvme/nvme_qpair.o 00:02:39.309 CC lib/nvme/nvme.o 00:02:40.704 CC lib/nvme/nvme_quirks.o 00:02:40.704 CC lib/nvme/nvme_transport.o 00:02:40.704 CC lib/nvme/nvme_discovery.o 00:02:40.704 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.704 LIB libspdk_thread.a 00:02:40.704 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.704 CC lib/nvme/nvme_tcp.o 00:02:40.704 SO libspdk_thread.so.10.1 00:02:40.704 CC lib/nvme/nvme_opal.o 00:02:40.704 SYMLINK libspdk_thread.so 00:02:40.704 CC lib/nvme/nvme_io_msg.o 00:02:40.962 CC lib/nvme/nvme_poll_group.o 00:02:41.220 CC lib/nvme/nvme_zns.o 00:02:41.220 CC lib/nvme/nvme_stubs.o 00:02:41.220 CC lib/nvme/nvme_auth.o 00:02:41.479 CC lib/accel/accel.o 00:02:41.479 CC lib/accel/accel_rpc.o 00:02:41.736 CC lib/accel/accel_sw.o 00:02:41.736 CC lib/nvme/nvme_cuse.o 00:02:41.994 CC lib/blob/blobstore.o 00:02:41.994 CC lib/blob/request.o 00:02:41.994 CC lib/init/json_config.o 00:02:42.252 CC lib/blob/zeroes.o 00:02:42.252 CC lib/init/subsystem.o 00:02:42.510 CC lib/blob/blob_bs_dev.o 00:02:42.768 CC lib/init/subsystem_rpc.o 00:02:42.768 CC lib/init/rpc.o 00:02:42.768 CC lib/nvme/nvme_rdma.o 00:02:42.768 CC lib/virtio/virtio.o 00:02:42.768 CC lib/virtio/virtio_vhost_user.o 00:02:43.025 CC lib/virtio/virtio_vfio_user.o 00:02:43.025 CC lib/virtio/virtio_pci.o 00:02:43.025 LIB libspdk_init.a 00:02:43.025 SO libspdk_init.so.5.0 00:02:43.025 SYMLINK libspdk_init.so 00:02:43.283 LIB libspdk_accel.a 00:02:43.283 SO libspdk_accel.so.15.1 00:02:43.283 LIB libspdk_virtio.a 00:02:43.283 CC lib/event/app.o 00:02:43.283 CC lib/event/reactor.o 00:02:43.283 CC lib/event/log_rpc.o 00:02:43.283 CC lib/event/app_rpc.o 00:02:43.283 CC lib/event/scheduler_static.o 00:02:43.283 SYMLINK libspdk_accel.so 00:02:43.283 SO libspdk_virtio.so.7.0 00:02:43.542 SYMLINK libspdk_virtio.so 00:02:43.542 CC lib/bdev/bdev.o 00:02:43.542 CC lib/bdev/part.o 00:02:43.542 CC lib/bdev/bdev_rpc.o 00:02:43.542 CC lib/bdev/bdev_zone.o 00:02:43.542 CC lib/bdev/scsi_nvme.o 00:02:44.107 LIB libspdk_event.a 00:02:44.107 SO libspdk_event.so.14.0 00:02:44.107 SYMLINK libspdk_event.so 00:02:44.672 LIB libspdk_nvme.a 00:02:44.930 SO libspdk_nvme.so.13.1 00:02:45.189 SYMLINK libspdk_nvme.so 00:02:45.446 LIB libspdk_blob.a 00:02:45.446 SO libspdk_blob.so.11.0 00:02:45.446 SYMLINK libspdk_blob.so 00:02:45.714 CC lib/lvol/lvol.o 00:02:45.714 CC lib/blobfs/blobfs.o 00:02:45.714 CC lib/blobfs/tree.o 00:02:46.654 LIB libspdk_bdev.a 00:02:46.654 LIB libspdk_blobfs.a 00:02:46.654 SO libspdk_bdev.so.15.1 00:02:46.654 SO libspdk_blobfs.so.10.0 00:02:46.654 SYMLINK libspdk_blobfs.so 00:02:46.654 SYMLINK libspdk_bdev.so 00:02:46.912 LIB libspdk_lvol.a 00:02:46.912 SO libspdk_lvol.so.10.0 00:02:46.912 CC lib/scsi/dev.o 
00:02:46.912 CC lib/scsi/lun.o 00:02:46.912 CC lib/scsi/port.o 00:02:46.912 CC lib/scsi/scsi.o 00:02:46.912 CC lib/nbd/nbd.o 00:02:46.912 CC lib/scsi/scsi_bdev.o 00:02:46.912 CC lib/ublk/ublk.o 00:02:46.912 CC lib/nvmf/ctrlr.o 00:02:46.912 CC lib/ftl/ftl_core.o 00:02:46.912 SYMLINK libspdk_lvol.so 00:02:46.912 CC lib/nvmf/ctrlr_discovery.o 00:02:47.169 CC lib/ublk/ublk_rpc.o 00:02:47.169 CC lib/nbd/nbd_rpc.o 00:02:47.426 CC lib/nvmf/ctrlr_bdev.o 00:02:47.427 CC lib/ftl/ftl_init.o 00:02:47.427 CC lib/nvmf/subsystem.o 00:02:47.427 CC lib/nvmf/nvmf.o 00:02:47.427 CC lib/nvmf/nvmf_rpc.o 00:02:47.685 LIB libspdk_nbd.a 00:02:47.685 CC lib/ftl/ftl_layout.o 00:02:47.685 SO libspdk_nbd.so.7.0 00:02:47.685 SYMLINK libspdk_nbd.so 00:02:47.685 CC lib/nvmf/transport.o 00:02:47.685 CC lib/scsi/scsi_pr.o 00:02:47.943 CC lib/nvmf/tcp.o 00:02:47.943 LIB libspdk_ublk.a 00:02:48.203 SO libspdk_ublk.so.3.0 00:02:48.203 SYMLINK libspdk_ublk.so 00:02:48.203 CC lib/nvmf/stubs.o 00:02:48.203 CC lib/ftl/ftl_debug.o 00:02:48.461 CC lib/scsi/scsi_rpc.o 00:02:48.461 CC lib/nvmf/mdns_server.o 00:02:48.461 CC lib/nvmf/rdma.o 00:02:48.719 CC lib/ftl/ftl_io.o 00:02:48.719 CC lib/scsi/task.o 00:02:48.719 CC lib/ftl/ftl_sb.o 00:02:48.719 CC lib/nvmf/auth.o 00:02:48.976 CC lib/ftl/ftl_l2p.o 00:02:48.976 CC lib/ftl/ftl_l2p_flat.o 00:02:48.976 CC lib/ftl/ftl_nv_cache.o 00:02:48.976 CC lib/ftl/ftl_band.o 00:02:48.976 LIB libspdk_scsi.a 00:02:49.234 CC lib/ftl/ftl_band_ops.o 00:02:49.234 CC lib/ftl/ftl_writer.o 00:02:49.234 SO libspdk_scsi.so.9.0 00:02:49.234 CC lib/ftl/ftl_rq.o 00:02:49.234 CC lib/ftl/ftl_reloc.o 00:02:49.492 SYMLINK libspdk_scsi.so 00:02:49.492 CC lib/ftl/ftl_l2p_cache.o 00:02:49.748 CC lib/iscsi/conn.o 00:02:49.749 CC lib/iscsi/init_grp.o 00:02:49.749 CC lib/iscsi/iscsi.o 00:02:49.749 CC lib/ftl/ftl_p2l.o 00:02:50.006 CC lib/vhost/vhost.o 00:02:50.006 CC lib/ftl/mngt/ftl_mngt.o 00:02:50.006 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:50.265 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:50.523 CC lib/vhost/vhost_rpc.o 00:02:50.523 CC lib/vhost/vhost_scsi.o 00:02:50.523 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:50.781 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:50.781 CC lib/vhost/vhost_blk.o 00:02:50.781 CC lib/vhost/rte_vhost_user.o 00:02:50.781 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:50.781 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.039 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.315 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.315 CC lib/iscsi/md5.o 00:02:51.315 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.315 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.572 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.572 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.572 CC lib/ftl/utils/ftl_conf.o 00:02:51.829 CC lib/ftl/utils/ftl_md.o 00:02:51.829 CC lib/ftl/utils/ftl_mempool.o 00:02:51.829 CC lib/iscsi/param.o 00:02:51.829 CC lib/iscsi/portal_grp.o 00:02:51.829 CC lib/iscsi/tgt_node.o 00:02:51.829 CC lib/iscsi/iscsi_subsystem.o 00:02:51.829 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.829 LIB libspdk_vhost.a 00:02:52.085 CC lib/iscsi/iscsi_rpc.o 00:02:52.085 SO libspdk_vhost.so.8.0 00:02:52.085 LIB libspdk_nvmf.a 00:02:52.085 CC lib/iscsi/task.o 00:02:52.085 CC lib/ftl/utils/ftl_property.o 00:02:52.085 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.085 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.085 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.085 SO libspdk_nvmf.so.18.1 00:02:52.341 SYMLINK libspdk_vhost.so 00:02:52.341 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.341 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.341 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.341 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:52.341 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.599 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.599 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.599 SYMLINK libspdk_nvmf.so 00:02:52.599 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:52.599 CC lib/ftl/base/ftl_base_dev.o 00:02:52.599 CC lib/ftl/base/ftl_base_bdev.o 00:02:52.599 CC lib/ftl/ftl_trace.o 00:02:52.599 LIB libspdk_iscsi.a 00:02:52.858 SO libspdk_iscsi.so.8.0 00:02:52.858 SYMLINK libspdk_iscsi.so 00:02:53.115 LIB libspdk_ftl.a 00:02:53.115 SO libspdk_ftl.so.9.0 00:02:53.678 SYMLINK libspdk_ftl.so 00:02:53.934 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.192 CC module/accel/dsa/accel_dsa.o 00:02:54.192 CC module/blob/bdev/blob_bdev.o 00:02:54.192 CC module/sock/posix/posix.o 00:02:54.192 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.192 CC module/accel/iaa/accel_iaa.o 00:02:54.192 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.192 CC module/keyring/file/keyring.o 00:02:54.192 CC module/accel/ioat/accel_ioat.o 00:02:54.192 CC module/accel/error/accel_error.o 00:02:54.192 LIB libspdk_env_dpdk_rpc.a 00:02:54.192 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.192 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.448 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.448 SYMLINK libspdk_env_dpdk_rpc.so 00:02:54.448 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.448 CC module/keyring/file/keyring_rpc.o 00:02:54.448 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.448 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.448 LIB libspdk_scheduler_dynamic.a 00:02:54.448 CC module/accel/error/accel_error_rpc.o 00:02:54.448 LIB libspdk_accel_iaa.a 00:02:54.449 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.449 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.449 LIB libspdk_blob_bdev.a 00:02:54.449 SO libspdk_accel_iaa.so.3.0 00:02:54.449 SO libspdk_blob_bdev.so.11.0 00:02:54.706 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.706 LIB libspdk_accel_dsa.a 00:02:54.706 SYMLINK libspdk_blob_bdev.so 00:02:54.706 LIB libspdk_keyring_file.a 00:02:54.706 SYMLINK libspdk_accel_iaa.so 00:02:54.706 SO libspdk_accel_dsa.so.5.0 00:02:54.706 LIB libspdk_accel_ioat.a 00:02:54.706 SO libspdk_keyring_file.so.1.0 00:02:54.706 LIB libspdk_accel_error.a 00:02:54.706 SO libspdk_accel_ioat.so.6.0 00:02:54.706 SYMLINK libspdk_accel_dsa.so 00:02:54.706 SYMLINK libspdk_keyring_file.so 00:02:54.706 CC module/keyring/linux/keyring.o 00:02:54.706 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.706 SO libspdk_accel_error.so.2.0 00:02:54.706 SYMLINK libspdk_accel_ioat.so 00:02:54.706 CC module/keyring/linux/keyring_rpc.o 00:02:54.963 SYMLINK libspdk_accel_error.so 00:02:54.963 CC module/bdev/delay/vbdev_delay.o 00:02:54.963 CC module/bdev/gpt/gpt.o 00:02:54.963 CC module/bdev/error/vbdev_error.o 00:02:54.963 CC module/bdev/lvol/vbdev_lvol.o 00:02:54.963 CC module/blobfs/bdev/blobfs_bdev.o 00:02:54.963 LIB libspdk_scheduler_gscheduler.a 00:02:54.963 LIB libspdk_keyring_linux.a 00:02:54.963 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.963 SO libspdk_keyring_linux.so.1.0 00:02:55.221 SYMLINK libspdk_scheduler_gscheduler.so 00:02:55.221 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.221 CC module/bdev/malloc/bdev_malloc.o 00:02:55.221 CC module/bdev/null/bdev_null.o 00:02:55.221 SYMLINK libspdk_keyring_linux.so 00:02:55.221 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:55.221 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.221 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.478 CC 
module/bdev/error/vbdev_error_rpc.o 00:02:55.478 LIB libspdk_sock_posix.a 00:02:55.478 SO libspdk_sock_posix.so.6.0 00:02:55.478 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.478 SYMLINK libspdk_sock_posix.so 00:02:55.736 CC module/bdev/null/bdev_null_rpc.o 00:02:55.736 LIB libspdk_bdev_delay.a 00:02:55.736 LIB libspdk_blobfs_bdev.a 00:02:55.736 LIB libspdk_bdev_error.a 00:02:55.736 SO libspdk_bdev_error.so.6.0 00:02:55.736 SO libspdk_bdev_delay.so.6.0 00:02:55.736 SO libspdk_blobfs_bdev.so.6.0 00:02:55.736 LIB libspdk_bdev_gpt.a 00:02:55.736 LIB libspdk_bdev_malloc.a 00:02:55.736 SO libspdk_bdev_gpt.so.6.0 00:02:55.736 SYMLINK libspdk_bdev_error.so 00:02:55.736 SYMLINK libspdk_blobfs_bdev.so 00:02:55.736 SYMLINK libspdk_bdev_delay.so 00:02:55.736 SO libspdk_bdev_malloc.so.6.0 00:02:55.993 CC module/bdev/nvme/bdev_nvme.o 00:02:55.993 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.993 LIB libspdk_bdev_lvol.a 00:02:55.993 SYMLINK libspdk_bdev_gpt.so 00:02:55.993 SYMLINK libspdk_bdev_malloc.so 00:02:55.993 LIB libspdk_bdev_null.a 00:02:55.993 SO libspdk_bdev_lvol.so.6.0 00:02:55.993 SO libspdk_bdev_null.so.6.0 00:02:55.993 CC module/bdev/raid/bdev_raid.o 00:02:55.993 SYMLINK libspdk_bdev_lvol.so 00:02:55.993 CC module/bdev/split/vbdev_split.o 00:02:55.993 SYMLINK libspdk_bdev_null.so 00:02:55.993 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.249 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.249 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.249 CC module/bdev/ftl/bdev_ftl.o 00:02:56.249 CC module/bdev/aio/bdev_aio.o 00:02:56.249 CC module/bdev/iscsi/bdev_iscsi.o 00:02:56.506 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:56.506 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.506 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.506 LIB libspdk_bdev_split.a 00:02:56.506 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:56.506 SO libspdk_bdev_split.so.6.0 00:02:56.764 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.764 SYMLINK libspdk_bdev_split.so 00:02:56.764 CC module/bdev/raid/raid0.o 00:02:56.764 LIB libspdk_bdev_passthru.a 00:02:56.764 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.764 SO libspdk_bdev_passthru.so.6.0 00:02:56.764 LIB libspdk_bdev_ftl.a 00:02:56.764 LIB libspdk_bdev_zone_block.a 00:02:56.764 LIB libspdk_bdev_iscsi.a 00:02:56.764 SO libspdk_bdev_ftl.so.6.0 00:02:56.764 SYMLINK libspdk_bdev_passthru.so 00:02:57.022 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.022 SO libspdk_bdev_zone_block.so.6.0 00:02:57.022 SO libspdk_bdev_iscsi.so.6.0 00:02:57.022 LIB libspdk_bdev_aio.a 00:02:57.023 SYMLINK libspdk_bdev_ftl.so 00:02:57.023 CC module/bdev/nvme/nvme_rpc.o 00:02:57.023 SO libspdk_bdev_aio.so.6.0 00:02:57.023 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.023 SYMLINK libspdk_bdev_zone_block.so 00:02:57.023 CC module/bdev/nvme/vbdev_opal.o 00:02:57.023 SYMLINK libspdk_bdev_iscsi.so 00:02:57.023 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:57.023 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.023 SYMLINK libspdk_bdev_aio.so 00:02:57.023 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.023 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.286 CC module/bdev/raid/raid1.o 00:02:57.286 CC module/bdev/raid/concat.o 00:02:57.544 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:57.802 LIB libspdk_bdev_virtio.a 00:02:57.802 SO libspdk_bdev_virtio.so.6.0 00:02:57.802 LIB libspdk_bdev_raid.a 00:02:57.802 SO libspdk_bdev_raid.so.6.0 00:02:57.802 SYMLINK libspdk_bdev_virtio.so 00:02:58.061 SYMLINK libspdk_bdev_raid.so 00:02:58.994 LIB libspdk_bdev_nvme.a 00:02:58.994 SO 
libspdk_bdev_nvme.so.7.0 00:02:58.994 SYMLINK libspdk_bdev_nvme.so 00:02:59.560 CC module/event/subsystems/keyring/keyring.o 00:02:59.560 CC module/event/subsystems/scheduler/scheduler.o 00:02:59.560 CC module/event/subsystems/iobuf/iobuf.o 00:02:59.560 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:59.561 CC module/event/subsystems/sock/sock.o 00:02:59.561 CC module/event/subsystems/vmd/vmd.o 00:02:59.561 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:59.561 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:59.561 LIB libspdk_event_keyring.a 00:02:59.561 SO libspdk_event_keyring.so.1.0 00:02:59.819 LIB libspdk_event_vhost_blk.a 00:02:59.819 LIB libspdk_event_vmd.a 00:02:59.819 LIB libspdk_event_sock.a 00:02:59.819 LIB libspdk_event_scheduler.a 00:02:59.819 LIB libspdk_event_iobuf.a 00:02:59.819 SYMLINK libspdk_event_keyring.so 00:02:59.819 SO libspdk_event_vhost_blk.so.3.0 00:02:59.819 SO libspdk_event_sock.so.5.0 00:02:59.819 SO libspdk_event_scheduler.so.4.0 00:02:59.819 SO libspdk_event_vmd.so.6.0 00:02:59.819 SO libspdk_event_iobuf.so.3.0 00:02:59.819 SYMLINK libspdk_event_vhost_blk.so 00:02:59.819 SYMLINK libspdk_event_vmd.so 00:02:59.819 SYMLINK libspdk_event_scheduler.so 00:02:59.819 SYMLINK libspdk_event_sock.so 00:02:59.819 SYMLINK libspdk_event_iobuf.so 00:03:00.078 CC module/event/subsystems/accel/accel.o 00:03:00.336 LIB libspdk_event_accel.a 00:03:00.336 SO libspdk_event_accel.so.6.0 00:03:00.336 SYMLINK libspdk_event_accel.so 00:03:00.595 CC module/event/subsystems/bdev/bdev.o 00:03:00.854 LIB libspdk_event_bdev.a 00:03:00.854 SO libspdk_event_bdev.so.6.0 00:03:00.854 SYMLINK libspdk_event_bdev.so 00:03:01.112 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:01.112 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:01.112 CC module/event/subsystems/nbd/nbd.o 00:03:01.112 CC module/event/subsystems/scsi/scsi.o 00:03:01.112 CC module/event/subsystems/ublk/ublk.o 00:03:01.370 LIB libspdk_event_ublk.a 00:03:01.370 LIB libspdk_event_scsi.a 00:03:01.370 LIB libspdk_event_nbd.a 00:03:01.370 SO libspdk_event_ublk.so.3.0 00:03:01.370 SO libspdk_event_scsi.so.6.0 00:03:01.370 SO libspdk_event_nbd.so.6.0 00:03:01.370 LIB libspdk_event_nvmf.a 00:03:01.370 SYMLINK libspdk_event_ublk.so 00:03:01.370 SYMLINK libspdk_event_nbd.so 00:03:01.370 SYMLINK libspdk_event_scsi.so 00:03:01.370 SO libspdk_event_nvmf.so.6.0 00:03:01.370 SYMLINK libspdk_event_nvmf.so 00:03:01.629 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:01.629 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.887 LIB libspdk_event_iscsi.a 00:03:01.887 LIB libspdk_event_vhost_scsi.a 00:03:01.887 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.887 SO libspdk_event_iscsi.so.6.0 00:03:01.887 SYMLINK libspdk_event_iscsi.so 00:03:01.887 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.887 SO libspdk.so.6.0 00:03:02.145 SYMLINK libspdk.so 00:03:02.145 CC app/trace_record/trace_record.o 00:03:02.403 CXX app/trace/trace.o 00:03:02.403 CC app/nvmf_tgt/nvmf_main.o 00:03:02.403 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.403 CC app/iscsi_tgt/iscsi_tgt.o 00:03:02.403 CC app/spdk_tgt/spdk_tgt.o 00:03:02.403 CC examples/ioat/perf/perf.o 00:03:02.403 CC examples/util/zipf/zipf.o 00:03:02.403 CC test/thread/poller_perf/poller_perf.o 00:03:02.662 LINK spdk_trace_record 00:03:02.662 LINK poller_perf 00:03:02.662 LINK interrupt_tgt 00:03:02.662 LINK zipf 00:03:02.662 LINK nvmf_tgt 00:03:02.662 LINK iscsi_tgt 00:03:02.920 LINK spdk_tgt 00:03:02.920 LINK ioat_perf 00:03:02.920 LINK spdk_trace 00:03:02.920 CC app/spdk_lspci/spdk_lspci.o 
00:03:03.178 LINK spdk_lspci 00:03:03.178 CC examples/ioat/verify/verify.o 00:03:03.178 CC app/spdk_nvme_perf/perf.o 00:03:03.437 TEST_HEADER include/spdk/accel.h 00:03:03.437 CC test/dma/test_dma/test_dma.o 00:03:03.437 TEST_HEADER include/spdk/accel_module.h 00:03:03.437 TEST_HEADER include/spdk/assert.h 00:03:03.437 TEST_HEADER include/spdk/barrier.h 00:03:03.437 TEST_HEADER include/spdk/base64.h 00:03:03.437 CC test/app/bdev_svc/bdev_svc.o 00:03:03.437 TEST_HEADER include/spdk/bdev.h 00:03:03.437 TEST_HEADER include/spdk/bdev_module.h 00:03:03.437 CC app/spdk_nvme_identify/identify.o 00:03:03.437 TEST_HEADER include/spdk/bdev_zone.h 00:03:03.437 TEST_HEADER include/spdk/bit_array.h 00:03:03.437 TEST_HEADER include/spdk/bit_pool.h 00:03:03.437 TEST_HEADER include/spdk/blob_bdev.h 00:03:03.437 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:03.437 TEST_HEADER include/spdk/blobfs.h 00:03:03.437 TEST_HEADER include/spdk/blob.h 00:03:03.437 TEST_HEADER include/spdk/conf.h 00:03:03.437 TEST_HEADER include/spdk/config.h 00:03:03.437 TEST_HEADER include/spdk/cpuset.h 00:03:03.437 TEST_HEADER include/spdk/crc16.h 00:03:03.437 TEST_HEADER include/spdk/crc32.h 00:03:03.437 TEST_HEADER include/spdk/crc64.h 00:03:03.437 TEST_HEADER include/spdk/dif.h 00:03:03.437 TEST_HEADER include/spdk/dma.h 00:03:03.437 TEST_HEADER include/spdk/endian.h 00:03:03.437 TEST_HEADER include/spdk/env_dpdk.h 00:03:03.437 TEST_HEADER include/spdk/env.h 00:03:03.437 TEST_HEADER include/spdk/event.h 00:03:03.437 TEST_HEADER include/spdk/fd_group.h 00:03:03.437 TEST_HEADER include/spdk/fd.h 00:03:03.437 TEST_HEADER include/spdk/file.h 00:03:03.437 TEST_HEADER include/spdk/ftl.h 00:03:03.437 TEST_HEADER include/spdk/gpt_spec.h 00:03:03.437 TEST_HEADER include/spdk/hexlify.h 00:03:03.437 TEST_HEADER include/spdk/histogram_data.h 00:03:03.437 TEST_HEADER include/spdk/idxd.h 00:03:03.437 TEST_HEADER include/spdk/idxd_spec.h 00:03:03.437 TEST_HEADER include/spdk/init.h 00:03:03.437 TEST_HEADER include/spdk/ioat.h 00:03:03.437 TEST_HEADER include/spdk/ioat_spec.h 00:03:03.437 TEST_HEADER include/spdk/iscsi_spec.h 00:03:03.437 TEST_HEADER include/spdk/json.h 00:03:03.437 TEST_HEADER include/spdk/jsonrpc.h 00:03:03.437 TEST_HEADER include/spdk/keyring.h 00:03:03.437 TEST_HEADER include/spdk/keyring_module.h 00:03:03.437 TEST_HEADER include/spdk/likely.h 00:03:03.437 TEST_HEADER include/spdk/log.h 00:03:03.437 TEST_HEADER include/spdk/lvol.h 00:03:03.437 TEST_HEADER include/spdk/memory.h 00:03:03.437 TEST_HEADER include/spdk/mmio.h 00:03:03.437 TEST_HEADER include/spdk/nbd.h 00:03:03.437 TEST_HEADER include/spdk/notify.h 00:03:03.437 TEST_HEADER include/spdk/nvme.h 00:03:03.437 TEST_HEADER include/spdk/nvme_intel.h 00:03:03.437 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:03.437 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:03.437 TEST_HEADER include/spdk/nvme_spec.h 00:03:03.437 TEST_HEADER include/spdk/nvme_zns.h 00:03:03.437 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:03.437 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:03.437 TEST_HEADER include/spdk/nvmf.h 00:03:03.437 TEST_HEADER include/spdk/nvmf_spec.h 00:03:03.437 TEST_HEADER include/spdk/nvmf_transport.h 00:03:03.437 TEST_HEADER include/spdk/opal.h 00:03:03.437 TEST_HEADER include/spdk/opal_spec.h 00:03:03.437 TEST_HEADER include/spdk/pci_ids.h 00:03:03.437 TEST_HEADER include/spdk/pipe.h 00:03:03.437 TEST_HEADER include/spdk/queue.h 00:03:03.437 TEST_HEADER include/spdk/reduce.h 00:03:03.437 TEST_HEADER include/spdk/rpc.h 00:03:03.437 CC examples/thread/thread/thread_ex.o 
00:03:03.437 TEST_HEADER include/spdk/scheduler.h 00:03:03.437 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.437 TEST_HEADER include/spdk/scsi.h 00:03:03.437 TEST_HEADER include/spdk/scsi_spec.h 00:03:03.437 TEST_HEADER include/spdk/sock.h 00:03:03.437 TEST_HEADER include/spdk/stdinc.h 00:03:03.437 TEST_HEADER include/spdk/string.h 00:03:03.437 TEST_HEADER include/spdk/thread.h 00:03:03.437 TEST_HEADER include/spdk/trace.h 00:03:03.437 TEST_HEADER include/spdk/trace_parser.h 00:03:03.437 TEST_HEADER include/spdk/tree.h 00:03:03.694 TEST_HEADER include/spdk/ublk.h 00:03:03.695 TEST_HEADER include/spdk/util.h 00:03:03.695 TEST_HEADER include/spdk/uuid.h 00:03:03.695 LINK verify 00:03:03.695 TEST_HEADER include/spdk/version.h 00:03:03.695 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:03.695 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:03.695 TEST_HEADER include/spdk/vhost.h 00:03:03.695 TEST_HEADER include/spdk/vmd.h 00:03:03.695 TEST_HEADER include/spdk/xor.h 00:03:03.695 LINK bdev_svc 00:03:03.695 TEST_HEADER include/spdk/zipf.h 00:03:03.695 CXX test/cpp_headers/accel.o 00:03:03.695 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:03.695 LINK test_dma 00:03:03.952 CXX test/cpp_headers/accel_module.o 00:03:03.952 LINK thread 00:03:03.952 CXX test/cpp_headers/assert.o 00:03:03.952 CC test/event/event_perf/event_perf.o 00:03:04.210 LINK nvme_fuzz 00:03:04.467 CXX test/cpp_headers/barrier.o 00:03:04.467 LINK event_perf 00:03:04.467 CC test/app/histogram_perf/histogram_perf.o 00:03:04.468 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.468 LINK spdk_nvme_identify 00:03:04.725 CC examples/sock/hello_world/hello_sock.o 00:03:04.725 LINK mem_callbacks 00:03:04.725 LINK spdk_nvme_perf 00:03:04.725 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.725 CXX test/cpp_headers/base64.o 00:03:04.725 LINK histogram_perf 00:03:04.982 CC test/event/reactor/reactor.o 00:03:04.982 LINK spdk_nvme_discover 00:03:04.982 CC test/event/reactor_perf/reactor_perf.o 00:03:04.982 CXX test/cpp_headers/bdev.o 00:03:04.982 CC test/env/vtophys/vtophys.o 00:03:04.982 CC app/spdk_top/spdk_top.o 00:03:04.982 LINK hello_sock 00:03:05.240 LINK reactor 00:03:05.240 LINK reactor_perf 00:03:05.240 LINK vtophys 00:03:05.240 CC app/vhost/vhost.o 00:03:05.240 CXX test/cpp_headers/bdev_module.o 00:03:05.240 CC test/app/jsoncat/jsoncat.o 00:03:05.498 CC test/app/stub/stub.o 00:03:05.498 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.498 LINK jsoncat 00:03:05.498 LINK vhost 00:03:05.498 CC test/env/memory/memory_ut.o 00:03:05.498 CC test/event/app_repeat/app_repeat.o 00:03:05.498 CXX test/cpp_headers/bdev_zone.o 00:03:05.755 CXX test/cpp_headers/bit_array.o 00:03:05.755 LINK env_dpdk_post_init 00:03:05.755 LINK stub 00:03:05.755 LINK app_repeat 00:03:06.012 CXX test/cpp_headers/bit_pool.o 00:03:06.012 CC app/spdk_dd/spdk_dd.o 00:03:06.270 CC app/fio/nvme/fio_plugin.o 00:03:06.270 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.270 CXX test/cpp_headers/blob_bdev.o 00:03:06.270 CC examples/vmd/led/led.o 00:03:06.527 CC test/event/scheduler/scheduler.o 00:03:06.527 LINK lsvmd 00:03:06.527 LINK spdk_top 00:03:06.527 LINK led 00:03:06.527 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.785 LINK spdk_dd 00:03:06.785 LINK scheduler 00:03:07.043 CC test/rpc_client/rpc_client_test.o 00:03:07.043 CC app/fio/bdev/fio_plugin.o 00:03:07.043 LINK iscsi_fuzz 00:03:07.043 LINK memory_ut 00:03:07.043 CC examples/idxd/perf/perf.o 00:03:07.043 CXX test/cpp_headers/blobfs.o 00:03:07.301 LINK rpc_client_test 00:03:07.301 LINK spdk_nvme 00:03:07.301 
CXX test/cpp_headers/blob.o 00:03:07.301 CC examples/accel/perf/accel_perf.o 00:03:07.301 CXX test/cpp_headers/conf.o 00:03:07.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.558 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.558 CC test/env/pci/pci_ut.o 00:03:07.558 LINK spdk_bdev 00:03:07.558 LINK idxd_perf 00:03:07.558 CXX test/cpp_headers/config.o 00:03:07.558 CXX test/cpp_headers/cpuset.o 00:03:07.558 CXX test/cpp_headers/crc16.o 00:03:07.558 CC examples/blob/hello_world/hello_blob.o 00:03:07.816 CC examples/nvme/hello_world/hello_world.o 00:03:07.816 LINK accel_perf 00:03:07.816 LINK vhost_fuzz 00:03:07.816 LINK pci_ut 00:03:07.816 CXX test/cpp_headers/crc32.o 00:03:08.073 LINK hello_world 00:03:08.073 LINK hello_blob 00:03:08.073 CC test/accel/dif/dif.o 00:03:08.073 CC test/blobfs/mkfs/mkfs.o 00:03:08.073 CXX test/cpp_headers/crc64.o 00:03:08.331 CXX test/cpp_headers/dif.o 00:03:08.331 CC test/lvol/esnap/esnap.o 00:03:08.589 CXX test/cpp_headers/dma.o 00:03:08.589 CXX test/cpp_headers/endian.o 00:03:08.589 CC examples/nvme/reconnect/reconnect.o 00:03:08.589 LINK mkfs 00:03:08.589 CC examples/blob/cli/blobcli.o 00:03:08.589 CC test/nvme/aer/aer.o 00:03:08.847 CXX test/cpp_headers/env_dpdk.o 00:03:08.847 CC test/nvme/reset/reset.o 00:03:08.847 LINK dif 00:03:08.847 CC test/nvme/sgl/sgl.o 00:03:09.106 CXX test/cpp_headers/env.o 00:03:09.106 LINK reconnect 00:03:09.106 LINK aer 00:03:09.363 CXX test/cpp_headers/event.o 00:03:09.363 CC test/nvme/e2edp/nvme_dp.o 00:03:09.363 LINK reset 00:03:09.363 LINK sgl 00:03:09.621 CXX test/cpp_headers/fd_group.o 00:03:09.621 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:09.621 CC test/nvme/overhead/overhead.o 00:03:09.621 LINK blobcli 00:03:09.621 CXX test/cpp_headers/fd.o 00:03:09.621 CC test/bdev/bdevio/bdevio.o 00:03:09.621 CC test/nvme/err_injection/err_injection.o 00:03:09.878 CC test/nvme/startup/startup.o 00:03:09.878 LINK nvme_dp 00:03:09.878 CXX test/cpp_headers/file.o 00:03:09.878 CXX test/cpp_headers/ftl.o 00:03:09.878 LINK err_injection 00:03:10.136 LINK startup 00:03:10.136 LINK overhead 00:03:10.136 CC test/nvme/reserve/reserve.o 00:03:10.136 CXX test/cpp_headers/gpt_spec.o 00:03:10.136 CXX test/cpp_headers/hexlify.o 00:03:10.136 LINK bdevio 00:03:10.394 LINK nvme_manage 00:03:10.394 CXX test/cpp_headers/histogram_data.o 00:03:10.394 CC test/nvme/simple_copy/simple_copy.o 00:03:10.394 CXX test/cpp_headers/idxd.o 00:03:10.394 CXX test/cpp_headers/idxd_spec.o 00:03:10.394 LINK reserve 00:03:10.394 CC test/nvme/connect_stress/connect_stress.o 00:03:10.651 CXX test/cpp_headers/init.o 00:03:10.651 CC examples/nvme/arbitration/arbitration.o 00:03:10.651 LINK simple_copy 00:03:10.651 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.651 CXX test/cpp_headers/ioat.o 00:03:10.651 CC test/nvme/boot_partition/boot_partition.o 00:03:10.909 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.909 LINK connect_stress 00:03:10.909 CC test/nvme/compliance/nvme_compliance.o 00:03:11.166 CXX test/cpp_headers/ioat_spec.o 00:03:11.166 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.166 LINK boot_partition 00:03:11.166 LINK arbitration 00:03:11.166 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.166 LINK hello_bdev 00:03:11.166 LINK nvme_compliance 00:03:11.423 CXX test/cpp_headers/iscsi_spec.o 00:03:11.423 CXX test/cpp_headers/json.o 00:03:11.423 LINK fused_ordering 00:03:11.423 LINK doorbell_aers 00:03:11.423 CC examples/nvme/hotplug/hotplug.o 00:03:11.680 CC test/nvme/fdp/fdp.o 00:03:11.680 CC test/nvme/cuse/cuse.o 00:03:11.680 CXX 
test/cpp_headers/jsonrpc.o 00:03:11.938 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:11.938 LINK hotplug 00:03:11.938 CC examples/nvme/abort/abort.o 00:03:11.938 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:11.938 CXX test/cpp_headers/keyring.o 00:03:12.197 LINK bdevperf 00:03:12.197 CXX test/cpp_headers/keyring_module.o 00:03:12.197 LINK cmb_copy 00:03:12.197 LINK fdp 00:03:12.197 CXX test/cpp_headers/likely.o 00:03:12.197 LINK pmr_persistence 00:03:12.456 CXX test/cpp_headers/log.o 00:03:12.456 CXX test/cpp_headers/lvol.o 00:03:12.456 CXX test/cpp_headers/memory.o 00:03:12.456 CXX test/cpp_headers/mmio.o 00:03:12.456 CXX test/cpp_headers/nbd.o 00:03:12.456 LINK abort 00:03:12.456 CXX test/cpp_headers/notify.o 00:03:12.456 CXX test/cpp_headers/nvme.o 00:03:12.714 CXX test/cpp_headers/nvme_intel.o 00:03:12.714 CXX test/cpp_headers/nvme_ocssd.o 00:03:12.714 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:12.714 CXX test/cpp_headers/nvme_spec.o 00:03:12.972 CXX test/cpp_headers/nvme_zns.o 00:03:12.972 CXX test/cpp_headers/nvmf_cmd.o 00:03:12.972 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:12.972 CXX test/cpp_headers/nvmf.o 00:03:12.972 CXX test/cpp_headers/nvmf_spec.o 00:03:12.972 CXX test/cpp_headers/nvmf_transport.o 00:03:13.230 CXX test/cpp_headers/opal.o 00:03:13.230 CXX test/cpp_headers/opal_spec.o 00:03:13.230 CXX test/cpp_headers/pci_ids.o 00:03:13.230 CXX test/cpp_headers/pipe.o 00:03:13.230 CC examples/nvmf/nvmf/nvmf.o 00:03:13.230 CXX test/cpp_headers/queue.o 00:03:13.230 CXX test/cpp_headers/reduce.o 00:03:13.230 CXX test/cpp_headers/rpc.o 00:03:13.230 CXX test/cpp_headers/scheduler.o 00:03:13.230 CXX test/cpp_headers/scsi.o 00:03:13.487 CXX test/cpp_headers/scsi_spec.o 00:03:13.487 CXX test/cpp_headers/sock.o 00:03:13.487 CXX test/cpp_headers/stdinc.o 00:03:13.487 CXX test/cpp_headers/string.o 00:03:13.487 CXX test/cpp_headers/thread.o 00:03:13.487 CXX test/cpp_headers/trace.o 00:03:13.744 CXX test/cpp_headers/trace_parser.o 00:03:13.744 CXX test/cpp_headers/tree.o 00:03:13.744 CXX test/cpp_headers/ublk.o 00:03:13.744 CXX test/cpp_headers/util.o 00:03:13.744 CXX test/cpp_headers/uuid.o 00:03:13.744 CXX test/cpp_headers/version.o 00:03:13.744 LINK nvmf 00:03:13.744 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.744 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.744 CXX test/cpp_headers/vhost.o 00:03:13.744 CXX test/cpp_headers/vmd.o 00:03:14.002 LINK cuse 00:03:14.002 CXX test/cpp_headers/xor.o 00:03:14.002 CXX test/cpp_headers/zipf.o 00:03:15.376 LINK esnap 00:03:15.939 00:03:15.939 real 1m27.789s 00:03:15.939 user 9m54.879s 00:03:15.939 sys 1m59.611s 00:03:15.939 11:21:53 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:15.939 11:21:53 make -- common/autotest_common.sh@10 -- $ set +x 00:03:15.939 ************************************ 00:03:15.939 END TEST make 00:03:15.939 ************************************ 00:03:15.939 11:21:53 -- common/autotest_common.sh@1142 -- $ return 0 00:03:15.940 11:21:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:15.940 11:21:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:15.940 11:21:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:15.940 11:21:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.940 11:21:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:15.940 11:21:53 -- pm/common@44 -- $ pid=5191 00:03:15.940 11:21:53 -- pm/common@50 -- $ kill -TERM 5191 00:03:15.940 11:21:53 -- pm/common@42 -- $ for monitor 
in "${MONITOR_RESOURCES[@]}" 00:03:15.940 11:21:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:15.940 11:21:53 -- pm/common@44 -- $ pid=5193 00:03:15.940 11:21:53 -- pm/common@50 -- $ kill -TERM 5193 00:03:15.940 11:21:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:15.940 11:21:53 -- nvmf/common.sh@7 -- # uname -s 00:03:15.940 11:21:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:15.940 11:21:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:15.940 11:21:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:15.940 11:21:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:15.940 11:21:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:15.940 11:21:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:15.940 11:21:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:15.940 11:21:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:15.940 11:21:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:15.940 11:21:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:15.940 11:21:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:03:15.940 11:21:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:03:15.940 11:21:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:15.940 11:21:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:15.940 11:21:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:15.940 11:21:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:15.940 11:21:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:15.940 11:21:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:15.940 11:21:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:15.940 11:21:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:15.940 11:21:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.940 11:21:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.940 11:21:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.940 11:21:53 -- paths/export.sh@5 -- # export PATH 00:03:15.940 11:21:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:15.940 11:21:53 -- nvmf/common.sh@47 -- # : 0 00:03:15.940 11:21:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:15.940 11:21:53 -- nvmf/common.sh@49 -- 
# build_nvmf_app_args 00:03:15.940 11:21:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:15.940 11:21:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:15.940 11:21:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:15.940 11:21:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:15.940 11:21:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:15.940 11:21:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:15.940 11:21:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:15.940 11:21:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:15.940 11:21:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:15.940 11:21:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:15.940 11:21:53 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:15.940 11:21:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:15.940 11:21:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:15.940 11:21:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:15.940 11:21:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:15.940 11:21:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:15.940 11:21:53 -- spdk/autotest.sh@48 -- # udevadm_pid=54756 00:03:15.940 11:21:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:15.940 11:21:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:15.940 11:21:53 -- pm/common@17 -- # local monitor 00:03:15.940 11:21:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.940 11:21:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:15.940 11:21:53 -- pm/common@25 -- # sleep 1 00:03:15.940 11:21:53 -- pm/common@21 -- # date +%s 00:03:15.940 11:21:53 -- pm/common@21 -- # date +%s 00:03:15.940 11:21:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721042513 00:03:15.940 11:21:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721042513 00:03:15.940 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721042513_collect-cpu-load.pm.log 00:03:15.940 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721042513_collect-vmstat.pm.log 00:03:16.868 11:21:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:16.868 11:21:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:16.868 11:21:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:16.868 11:21:54 -- common/autotest_common.sh@10 -- # set +x 00:03:16.868 11:21:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:16.868 11:21:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:16.868 11:21:54 -- common/autotest_common.sh@10 -- # set +x 00:03:16.868 11:21:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:17.173 11:21:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:17.173 11:21:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:17.173 11:21:54 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:17.173 11:21:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:17.173 11:21:54 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.173 11:21:54 -- common/autotest_common.sh@1455 -- # uname 00:03:17.173 11:21:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:17.173 11:21:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.173 11:21:54 -- common/autotest_common.sh@1475 -- # uname 00:03:17.173 11:21:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:17.173 11:21:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:17.173 11:21:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:17.173 11:21:54 -- spdk/autotest.sh@72 -- # hash lcov 00:03:17.173 11:21:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:17.173 11:21:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:17.173 --rc lcov_branch_coverage=1 00:03:17.173 --rc lcov_function_coverage=1 00:03:17.173 --rc genhtml_branch_coverage=1 00:03:17.173 --rc genhtml_function_coverage=1 00:03:17.173 --rc genhtml_legend=1 00:03:17.173 --rc geninfo_all_blocks=1 00:03:17.173 ' 00:03:17.173 11:21:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:17.173 --rc lcov_branch_coverage=1 00:03:17.173 --rc lcov_function_coverage=1 00:03:17.173 --rc genhtml_branch_coverage=1 00:03:17.173 --rc genhtml_function_coverage=1 00:03:17.173 --rc genhtml_legend=1 00:03:17.173 --rc geninfo_all_blocks=1 00:03:17.173 ' 00:03:17.173 11:21:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:17.173 --rc lcov_branch_coverage=1 00:03:17.173 --rc lcov_function_coverage=1 00:03:17.173 --rc genhtml_branch_coverage=1 00:03:17.173 --rc genhtml_function_coverage=1 00:03:17.173 --rc genhtml_legend=1 00:03:17.173 --rc geninfo_all_blocks=1 00:03:17.173 --no-external' 00:03:17.173 11:21:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:17.173 --rc lcov_branch_coverage=1 00:03:17.173 --rc lcov_function_coverage=1 00:03:17.173 --rc genhtml_branch_coverage=1 00:03:17.173 --rc genhtml_function_coverage=1 00:03:17.173 --rc genhtml_legend=1 00:03:17.173 --rc geninfo_all_blocks=1 00:03:17.173 --no-external' 00:03:17.173 11:21:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:17.173 lcov: LCOV version 1.14 00:03:17.173 11:21:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:35.275 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:35.275 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:47.488 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:47.488 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:47.488 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:47.489 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:47.489 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:47.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:50.773 11:22:27 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:50.773 11:22:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.773 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:03:50.773 11:22:27 -- spdk/autotest.sh@91 -- # rm -f 00:03:50.773 11:22:27 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.031 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:51.290 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:51.290 11:22:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:51.290 11:22:28 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.290 11:22:28 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.290 11:22:28 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.290 11:22:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.290 11:22:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:51.290 11:22:28 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.290 11:22:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.290 11:22:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:51.290 11:22:28 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:51.290 11:22:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.290 11:22:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:51.290 11:22:28 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:51.290 11:22:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.290 11:22:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:51.290 11:22:28 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:51.290 11:22:28 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:51.290 11:22:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:51.290 11:22:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:51.290 11:22:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.290 11:22:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.290 11:22:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:51.290 11:22:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:51.290 
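The get_zoned_devs trace above walks /sys/block/nvme* and skips any namespace whose queue/zoned attribute reads anything other than "none", so zoned devices are never handed to the GPT check and wipe that follows. A minimal standalone sketch of that scan, kept simpler than the traced helper (which also records each device's PCI address in an associative array); the function name here is illustrative, not the SPDK helper itself:

#!/usr/bin/env bash
# Sketch: list NVMe block devices that report a zoned mode other than "none".
get_zoned_devs_sketch() {
    local dev zoned
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue     # no attribute => treat as not zoned
        zoned=$(<"$dev/queue/zoned")              # "none", "host-managed", ...
        [[ $zoned != none ]] && echo "${dev##*/}" # print only the device name
    done
}
get_zoned_devs_sketch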
11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.290 No valid GPT data, bailing 00:03:51.290 11:22:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.290 11:22:28 -- scripts/common.sh@391 -- # pt= 00:03:51.290 11:22:28 -- scripts/common.sh@392 -- # return 1 00:03:51.290 11:22:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.290 1+0 records in 00:03:51.290 1+0 records out 00:03:51.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044534 s, 235 MB/s 00:03:51.290 11:22:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.290 11:22:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.290 11:22:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:51.290 11:22:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:51.290 11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:51.290 No valid GPT data, bailing 00:03:51.290 11:22:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.290 11:22:28 -- scripts/common.sh@391 -- # pt= 00:03:51.290 11:22:28 -- scripts/common.sh@392 -- # return 1 00:03:51.290 11:22:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:51.290 1+0 records in 00:03:51.290 1+0 records out 00:03:51.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442395 s, 237 MB/s 00:03:51.290 11:22:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.290 11:22:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.290 11:22:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:51.290 11:22:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:51.290 11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:51.290 No valid GPT data, bailing 00:03:51.290 11:22:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:51.290 11:22:28 -- scripts/common.sh@391 -- # pt= 00:03:51.290 11:22:28 -- scripts/common.sh@392 -- # return 1 00:03:51.290 11:22:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:51.549 1+0 records in 00:03:51.549 1+0 records out 00:03:51.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042755 s, 245 MB/s 00:03:51.549 11:22:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.549 11:22:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.549 11:22:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:51.549 11:22:28 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:51.549 11:22:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:51.549 No valid GPT data, bailing 00:03:51.549 11:22:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:51.549 11:22:28 -- scripts/common.sh@391 -- # pt= 00:03:51.549 11:22:28 -- scripts/common.sh@392 -- # return 1 00:03:51.549 11:22:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:51.549 1+0 records in 00:03:51.549 1+0 records out 00:03:51.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421265 s, 249 MB/s 00:03:51.549 11:22:28 -- spdk/autotest.sh@118 -- # sync 00:03:51.549 11:22:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.549 11:22:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.549 11:22:28 -- common/autotest_common.sh@22 
-- # reap_spdk_processes 00:03:53.449 11:22:30 -- spdk/autotest.sh@124 -- # uname -s 00:03:53.449 11:22:30 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:53.449 11:22:30 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:53.449 11:22:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.449 11:22:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.449 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:03:53.449 ************************************ 00:03:53.449 START TEST setup.sh 00:03:53.449 ************************************ 00:03:53.449 11:22:30 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:53.449 * Looking for test storage... 00:03:53.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.449 11:22:30 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:53.449 11:22:30 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:53.449 11:22:30 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:53.449 11:22:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.450 11:22:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.450 11:22:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.450 ************************************ 00:03:53.450 START TEST acl 00:03:53.450 ************************************ 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:53.450 * Looking for test storage... 00:03:53.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:53.450 11:22:30 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.450 11:22:30 
setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.450 11:22:30 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:53.450 11:22:30 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:53.450 11:22:30 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:53.450 11:22:30 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:53.450 11:22:30 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:53.450 11:22:30 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:53.450 11:22:30 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.450 11:22:30 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.015 11:22:31 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:54.015 11:22:31 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:54.015 11:22:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.015 11:22:31 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:54.015 11:22:31 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.015 11:22:31 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.949 Hugepages 00:03:54.949 node hugesize free / total 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.949 00:03:54.949 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.949 11:22:32 setup.sh.acl 
-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:54.949 11:22:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:54.949 11:22:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.949 11:22:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.949 11:22:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.949 ************************************ 00:03:54.949 START TEST denied 00:03:54.949 ************************************ 00:03:54.949 11:22:32 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:54.949 11:22:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:54.949 11:22:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:54.949 11:22:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.949 11:22:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.949 11:22:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:55.885 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.885 11:22:33 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.450 00:03:56.450 real 0m1.412s 00:03:56.450 user 0m0.569s 00:03:56.450 sys 0m0.767s 00:03:56.450 11:22:33 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.450 11:22:33 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:56.450 ************************************ 00:03:56.450 END TEST denied 00:03:56.450 ************************************ 00:03:56.450 11:22:33 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:56.450 11:22:33 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:56.450 11:22:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.450 11:22:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.450 11:22:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:56.450 ************************************ 00:03:56.450 START TEST allowed 00:03:56.450 ************************************ 00:03:56.450 11:22:33 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:56.450 11:22:33 
setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:56.450 11:22:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:56.450 11:22:33 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:56.450 11:22:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.450 11:22:33 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.383 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:57.383 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:57.384 11:22:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:57.384 11:22:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.384 11:22:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.950 00:03:57.950 real 0m1.435s 00:03:57.950 user 0m0.659s 00:03:57.950 sys 0m0.767s 00:03:57.950 11:22:35 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.950 11:22:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:57.950 ************************************ 00:03:57.950 END TEST allowed 00:03:57.950 ************************************ 00:03:57.950 11:22:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:57.950 00:03:57.950 real 0m4.595s 00:03:57.950 user 0m2.088s 00:03:57.950 sys 0m2.441s 00:03:57.950 11:22:35 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.950 11:22:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.950 ************************************ 00:03:57.950 END TEST acl 00:03:57.950 ************************************ 00:03:57.950 11:22:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:57.951 11:22:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:57.951 11:22:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.951 11:22:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.951 11:22:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 ************************************ 00:03:57.951 START TEST hugepages 00:03:57.951 ************************************ 00:03:57.951 11:22:35 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:57.951 * Looking for test storage... 
00:03:57.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5906300 kB' 'MemAvailable: 7409500 kB' 'Buffers: 2436 kB' 'Cached: 1714548 kB' 'SwapCached: 0 kB' 'Active: 476772 kB' 'Inactive: 1344296 kB' 'Active(anon): 114572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344296 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105980 kB' 'Mapped: 48600 kB' 'Shmem: 10488 kB' 'KReclaimable: 67284 kB' 'Slab: 140672 kB' 'SReclaimable: 67284 kB' 'SUnreclaim: 73388 kB' 'KernelStack: 6380 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 342368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.951 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
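The long run of IFS=': ' / read -r / continue entries above (and just below, up to the "echo 2048" that follows) is setup/common.sh's get_meminfo helper walking /proc/meminfo one key at a time until it reaches the field it was asked for, here Hugepagesize. A minimal sketch of that loop, reconstructed from this xtrace output; the names (get, node, mem_f, mem) follow the trace, but the real helper in the SPDK test tree may differ in detail:

  shopt -s extglob                     # the +([0-9]) pattern below needs extglob
  get_meminfo() {                      # usage: get_meminfo <key> [node]
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # per-node queries read that node's own meminfo file instead of /proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N" prefix found in per-node files
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue # skip every other meminfo key, as in the trace above
      echo "$val"                      # the trailing "kB" lands in _, so this is a bare number
      return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

Called as default_hugepages=$(get_meminfo Hugepagesize), this yields the 2048 that hugepages.sh stores a few entries further down.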
00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.952 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.213 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.213 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.213 11:22:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:58.213 11:22:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.213 11:22:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.213 11:22:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.213 ************************************ 00:03:58.213 START TEST default_setup 00:03:58.213 ************************************ 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.213 11:22:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.779 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.779 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.779 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.043 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.044 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8001860 kB' 'MemAvailable: 9504852 kB' 'Buffers: 2436 kB' 'Cached: 1714540 kB' 'SwapCached: 0 kB' 'Active: 493624 kB' 'Inactive: 1344308 kB' 'Active(anon): 131424 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140072 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73224 kB' 'KernelStack: 6304 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
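The snapshot printed a little earlier reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is exactly what the default_setup test asked for: get_test_nr_hugepages was called with a size of 2097152 kB and, at 2048 kB per huge page, that presumably reduces to the nr_hugepages=1024 seen on the single node. A quick check of that arithmetic (the division is an assumption about how get_test_nr_hugepages derives the count; only the inputs and the 1024 result appear in the trace):

  # 2097152 kB requested / 2048 kB per 2 MiB huge page = 1024 pages, all on node 0
  echo $(( 2097152 / 2048 ))   # -> 1024, matching nr_hugepages=1024 and HugePages_Total: 1024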
00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.044 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
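The remainder of this part of the trace is verify_nr_hugepages making the same kind of get_meminfo pass several times over, once per counter it needs: AnonHugePages (assigned to anon=0 just below), HugePages_Surp (surp=0 further down), and then HugePages_Rsvd. Condensed, the bookkeeping amounts to something like the following; anon and surp appear verbatim in the trace, while resv is a guessed name because the section breaks off before the HugePages_Rsvd value is assigned:

  anon=$(get_meminfo AnonHugePages)    # 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # hypothetical variable name; the trace is cut off here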
00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8001108 kB' 'MemAvailable: 9504100 kB' 'Buffers: 2436 kB' 'Cached: 1714540 kB' 'SwapCached: 0 kB' 'Active: 493456 kB' 'Inactive: 1344308 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122348 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140068 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6288 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 
'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.045 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 
11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.046 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.047 
11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8001108 kB' 'MemAvailable: 9504100 kB' 'Buffers: 2436 kB' 'Cached: 1714540 kB' 'SwapCached: 0 kB' 'Active: 493336 kB' 'Inactive: 1344308 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140068 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 
11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.047 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.048 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:59.049 nr_hugepages=1024 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.049 resv_hugepages=0 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.049 surplus_hugepages=0 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.049 anon_hugepages=0 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.049 11:22:36 
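For readers following the trace, everything above is the setup/common.sh get_meminfo helper at work: it loads either /proc/meminfo or a per-node meminfo file into an array, strips the "Node <id>" prefix that the per-node files carry, then splits each entry on ': ' and echoes the value of the requested key (0 for HugePages_Rsvd here). A minimal stand-alone sketch of that pattern (simplified, with illustrative naming; not the verbatim SPDK helper):

    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}           # key to look up, optional NUMA node id
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node <id> "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }

With surp=0 and resv=0 obtained this way, the (( 1024 == nr_hugepages + surp + resv )) check traced just above reduces to plain 1024 == 1024, and the trace then re-reads HugePages_Total below.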
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8001108 kB' 'MemAvailable: 9504100 kB' 'Buffers: 2436 kB' 'Cached: 1714540 kB' 'SwapCached: 0 kB' 'Active: 493348 kB' 'Inactive: 1344308 kB' 'Active(anon): 131148 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122260 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66848 kB' 'Slab: 140068 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6304 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.049 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.050 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.051 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8001108 kB' 'MemUsed: 4240864 kB' 'SwapCached: 0 kB' 'Active: 493024 kB' 'Inactive: 1344308 kB' 'Active(anon): 130824 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344308 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1716976 kB' 'Mapped: 48600 kB' 'AnonPages: 122196 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66848 kB' 'Slab: 140068 kB' 'SReclaimable: 66848 kB' 'SUnreclaim: 73220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.051 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.052 node0=1024 expecting 1024 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.052 00:03:59.052 real 0m0.996s 00:03:59.052 user 0m0.461s 00:03:59.052 sys 0m0.461s 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.052 11:22:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:59.052 ************************************ 00:03:59.052 END TEST default_setup 00:03:59.052 ************************************ 00:03:59.052 11:22:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.052 11:22:36 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:59.052 11:22:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
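The tail of the default_setup trace above is the verify_nr_hugepages comparison: the per-node counts collected into nodes_test feed the sorted_t/sorted_s arrays and each node is checked against the value the test expects (here node0=1024). A minimal standalone sketch of that per-node check, assuming the live count is read from per-node sysfs (the harness itself derives it via get_meminfo, and the snippet below is illustrative, not part of setup/hugepages.sh):

    # Sketch: compare configured 2 MiB hugepages per NUMA node against what the kernel reports.
    nodes_test=( [0]=1024 )   # pages this run expects on node0
    for node in "${!nodes_test[@]}"; do
        # One possible source for the live value (assumption: per-node sysfs counters exist).
        actual=$(cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${node}=${actual} expecting ${nodes_test[node]}"
        [[ $actual == "${nodes_test[node]}" ]] || exit 1
    done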
00:03:59.052 11:22:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.052 11:22:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.052 ************************************ 00:03:59.052 START TEST per_node_1G_alloc 00:03:59.052 ************************************ 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.052 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.626 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.626 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:59.626 
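The per_node_1G_alloc preamble just traced converts the requested 1048576 kB (1 GiB) into a count of default-size hugepages and pins the whole allocation to node 0 before invoking scripts/setup.sh. The arithmetic, restated as a small bash snippet (variable names here are illustrative; NRHUGE and HUGENODE are the knobs the trace actually passes to setup.sh):

    size_kb=1048576                                                  # requested by get_test_nr_hugepages
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this VM
    nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1048576 / 2048 = 512
    echo "NRHUGE=${nr_hugepages} HUGENODE=0"
    # which matches the invocation in the trace:
    #   NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh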
11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9052768 kB' 'MemAvailable: 10555768 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493704 kB' 'Inactive: 1344316 kB' 'Active(anon): 131504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122636 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140028 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73184 kB' 'KernelStack: 6308 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.626 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.626 [setup/common.sh@31-32: the scan steps through the remaining /proc/meminfo fields in file order, MemFree through HardwareCorrupted; none matches AnonHugePages, so each iteration continues]
00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
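Both get_meminfo calls above follow the same pattern that produced the long scans in this trace: load the meminfo file into an array, strip any leading "Node <n>" prefix, then walk the key/value pairs until the requested field matches and echo its value. A compact restatement of that logic (a sketch based on the traced commands, not a verbatim copy of setup/common.sh):

    shopt -s extglob   # needed for the "Node <n> " prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # Against the snapshot above: get_meminfo HugePages_Free  ->  512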
00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.627 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9052516 kB' 'MemAvailable: 10555516 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493532 kB' 'Inactive: 1344316 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48532 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140072 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73228 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.628 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:03:59.628 [setup/common.sh@31-32: the scan steps through the remaining /proc/meminfo fields in file order, Buffers through HugePages_Rsvd; none matches HugePages_Surp, so each iteration continues]
00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.629 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9052516 kB' 'MemAvailable: 10555516 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493384 kB' 'Inactive: 1344316 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140064 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73220 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.630 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.630 11:22:36 
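At this point verify_nr_hugepages has anon=0 and surp=0 and is fetching HugePages_Rsvd; all three values come out of the same /proc/meminfo snapshot printed just above. For a quick manual check outside the harness, the relevant fields can be pulled directly (a one-liner for illustration, not part of the SPDK scripts):

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
    # Per the snapshots in this run: AnonHugePages: 0 kB, HugePages_Total: 512,
    # HugePages_Free: 512, HugePages_Rsvd: 0, HugePages_Surp: 0,
    # Hugepagesize: 2048 kB, Hugetlb: 1048576 kB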
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.630 [setup/common.sh@31-32: the HugePages_Rsvd lookup repeats the same scan over Buffers through SUnreclaim; none matches, so each iteration continues]
00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.631 nr_hugepages=512 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:59.631 resv_hugepages=0 00:03:59.631 surplus_hugepages=0 00:03:59.631 anon_hugepages=0 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
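The xtrace above is setup/common.sh's get_meminfo helper scanning each meminfo field in turn until it reaches the requested key; here HugePages_Rsvd resolves to 0, so resv=0. A minimal bash sketch of that lookup, reconstructed only from the trace shown here and not taken from the SPDK sources (the 'Node N ' prefix strip and the sysfs fallback path are visible above; the final 'echo 0' fallback and the argument handling are assumptions):

shopt -s extglob   # needed for the +([0-9]) pattern used below

# Hypothetical reconstruction of the lookup seen in the trace, not the SPDK source.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    # Per-node statistics live in sysfs; each line there carries a "Node N " prefix.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0                               # assumed fallback when the key is absent
}

resv=$(get_meminfo HugePages_Rsvd)       # system-wide, reads /proc/meminfo
surp0=$(get_meminfo HugePages_Surp 0)    # node 0, reads .../node0/meminfo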
00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.631 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9054304 kB' 'MemAvailable: 10557304 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493292 kB' 'Inactive: 1344316 kB' 'Active(anon): 131092 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140056 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73212 kB' 'KernelStack: 6304 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 
11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.632 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 
)) 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9054052 kB' 'MemUsed: 3187920 kB' 'SwapCached: 0 kB' 'Active: 493356 kB' 'Inactive: 1344316 kB' 'Active(anon): 131156 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1716980 kB' 'Mapped: 48604 kB' 'AnonPages: 122308 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66844 kB' 'Slab: 140056 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.633 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 
11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.634 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.635 node0=512 expecting 512 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.635 00:03:59.635 real 0m0.571s 00:03:59.635 user 0m0.298s 00:03:59.635 sys 0m0.269s 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.635 11:22:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.635 ************************************ 00:03:59.635 END TEST per_node_1G_alloc 00:03:59.635 ************************************ 00:03:59.635 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.635 11:22:37 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:59.635 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.635 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.635 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.893 ************************************ 00:03:59.893 START TEST even_2G_alloc 00:03:59.893 ************************************ 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.893 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:59.894 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.894 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.157 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.157 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.157 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008692 kB' 'MemAvailable: 9511692 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493336 kB' 'Inactive: 1344316 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122248 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140044 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73200 kB' 'KernelStack: 6292 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
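Stepping back to the even_2G_alloc setup traced just before this scan: the test asks get_test_nr_hugepages for 2097152 kB, which at the 2048 kB default hugepage size (see Hugepagesize in the dump above) works out to 1024 pages; with a single NUMA node they are all assigned to node 0, and setup.sh is run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so the reservation is spread evenly across the nodes (just node0 here). The arithmetic in sketch form, with variable names of my own where the trace does not show them:

    #!/usr/bin/env bash
    # Sizing behind "nr_hugepages=1024" in the even_2G_alloc setup.
    size_kb=2097152              # requested pool: 2 GiB expressed in kB
    hugepage_kb=2048             # default hugepage size on this VM (Hugepagesize)
    nr_hugepages=$((size_kb / hugepage_kb))
    echo "nr_hugepages=$nr_hugepages"    # 1024

    # One NUMA node, so the whole allocation lands on node 0.
    no_nodes=1
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$nr_hugepages
    echo "node0=${nodes_test[0]}"
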
00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.158 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008692 kB' 'MemAvailable: 9511692 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493088 kB' 'Inactive: 1344316 kB' 'Active(anon): 130888 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122304 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140044 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73200 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 
11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.159 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 
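By this point verify_nr_hugepages has read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is starting a third full scan for HugePages_Rsvd; the values feed the per-node check that ends each of these tests (the previous test finished with "node0=512 expecting 512"). An outline of that verification flow, with the final comparison inferred rather than read off the trace, and using plain awk in place of the real get_meminfo helper:

    #!/usr/bin/env bash
    # Inferred outline of the hugepage verification traced here: gather the
    # counters, then compare the configured pool with what the kernel reports.
    meminfo_val() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

    expected=1024                        # NRHUGE for even_2G_alloc
    anon=$(meminfo_val AnonHugePages)
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    total=$(meminfo_val HugePages_Total)

    echo "anon=$anon surp=$surp resv=$resv total=$total"
    [[ $total == "$expected" ]] && echo "node0=$total expecting $expected"
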
00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.160 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008692 kB' 'MemAvailable: 9511692 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493096 kB' 'Inactive: 1344316 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122304 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140040 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73196 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.161 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.162 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.163 nr_hugepages=1024 00:04:00.163 resv_hugepages=0 00:04:00.163 surplus_hugepages=0 00:04:00.163 anon_hugepages=0 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
-- # local mem_f mem 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008692 kB' 'MemAvailable: 9511692 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493352 kB' 'Inactive: 1344316 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122264 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140040 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73196 kB' 'KernelStack: 6304 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.163 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.164 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.165 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008692 kB' 'MemUsed: 4233280 kB' 'SwapCached: 0 kB' 'Active: 493092 kB' 'Inactive: 1344316 kB' 'Active(anon): 130892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 
'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1716980 kB' 'Mapped: 48604 kB' 'AnonPages: 122296 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66844 kB' 'Slab: 140032 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.424 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.425 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.426 node0=1024 expecting 1024 00:04:00.426 ************************************ 00:04:00.426 END TEST even_2G_alloc 00:04:00.426 ************************************ 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.426 00:04:00.426 real 0m0.565s 00:04:00.426 user 0m0.262s 00:04:00.426 sys 0m0.290s 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.426 11:22:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.426 11:22:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.426 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.426 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.426 11:22:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.426 ************************************ 00:04:00.426 START TEST odd_alloc 00:04:00.426 ************************************ 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
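The even_2G_alloc trace that ends above is setup/common.sh repeatedly running get_meminfo: it opens /proc/meminfo (or /sys/devices/system/node/node0/meminfo when a node is given), strips the "Node <N> " prefix, and walks the fields with IFS=': ' until the requested key matches, echoing its value. A minimal stand-alone sketch of that lookup pattern, using a hypothetical helper name rather than the verbatim SPDK function and assuming only what the trace shows:

    # Sketch of the meminfo key lookup traced above (illustrative only,
    # not the exact test/setup/common.sh helper).
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }           # per-node files prefix each entry
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                      # e.g. 1024 for HugePages_Free
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Free 0   # node0 in this log reports 1024

In the log these lookups feed the hugepages.sh checks that HugePages_Total equals nr_hugepages + surplus + reserved (1024 here), globally and per node, which is why the test finishes with "node0=1024 expecting 1024" and passes.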
odd_alloc 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.426 11:22:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.687 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.687 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- 
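Just above, odd_alloc requests 2098176 kB of hugepage memory, which the trace resolves to nr_hugepages=1025 with the 2048 kB hugepage size reported earlier, and it sets HUGEMEM=2049 (2049 MiB × 1024 = 2098176 kB) plus HUGE_EVEN_ALLOC=yes for the scripts/setup.sh run. The page count is consistent with ceiling division, sketched here as an illustration rather than the verbatim hugepages.sh arithmetic:

    # 2098176 kB requested at 2048 kB per hugepage -> 1024.5, rounded up to 1025
    size_kb=2098176
    hugepagesize_kb=2048
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
    echo "$nr_hugepages"   # 1025, matching nr_hugepages=1025 in the trace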
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.687 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8007088 kB' 'MemAvailable: 9510088 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493780 kB' 'Inactive: 1344316 kB' 'Active(anon): 131580 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140044 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73200 kB' 'KernelStack: 6340 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
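The long xtrace runs in this test are setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: it mapfiles the file, strips any "Node N" prefix, splits each line on ': ', and hits "continue" until the requested field (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd) is found, echoing 0 when the field is absent. A minimal sketch of that lookup, reconstructed from the trace above and simplified (not the verbatim SPDK source; the per-node handling in particular is abbreviated):

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}            # field name, optional NUMA node id
        local mem_f=/proc/meminfo var val _
        # Per-node queries read the node-local meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # node-local files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # the long runs of "continue" seen in this trace
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0                               # field absent: report 0
    }

As a usage check against this run, get_meminfo HugePages_Free would print 1025 here, matching the 'HugePages_Free: 1025' field in the meminfo dumps above.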
00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.688 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8007340 kB' 'MemAvailable: 9510340 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493448 kB' 'Inactive: 1344316 kB' 'Active(anon): 131248 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122492 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140076 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73232 kB' 'KernelStack: 6336 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
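odd_alloc exports HUGEMEM=2049 so that the requested size (2049 MB = 2098176 kB) does not divide evenly by the 2048 kB hugepage size, which is how get_test_nr_hugepages arrives at the odd count of 1025 seen at hugepages.sh@57. The HugePages_Surp and HugePages_Rsvd lookups that begin here feed the consistency check at hugepages.sh@107/@109 near the end of this test. A rough worked version of that bookkeeping, using values taken from this log (the ceiling division is one way to reproduce the 1025 and the awk lookups stand in for the script's get_meminfo calls; both are illustrative, not the script's exact expressions):

    size_kb=2098176                # HUGEMEM=2049 MB, as exported by odd_alloc in this run
    hugepagesize_kb=2048           # Hugepagesize from the /proc/meminfo dumps above
    nr_hugepages=$(((size_kb + hugepagesize_kb - 1) / hugepagesize_kb))   # 1024.5 rounds up to 1025
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 in this run
    ((1025 == nr_hugepages + surp + resv)) && echo "odd_alloc bookkeeping consistent"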
00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.689 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 
11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.690 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8007340 kB' 'MemAvailable: 9510340 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493116 kB' 'Inactive: 1344316 kB' 'Active(anon): 130916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140100 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73256 kB' 'KernelStack: 6288 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 
11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.691 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.692 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 
11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.952 nr_hugepages=1025 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:00.952 resv_hugepages=0 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.952 surplus_hugepages=0 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.952 anon_hugepages=0 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8007632 kB' 'MemAvailable: 9510632 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493780 kB' 'Inactive: 1344316 kB' 'Active(anon): 131580 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140092 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73248 kB' 'KernelStack: 6352 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 362112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.952 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.953 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8007632 kB' 'MemUsed: 4234340 kB' 'SwapCached: 0 kB' 'Active: 493308 kB' 'Inactive: 1344316 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1716980 kB' 'Mapped: 48668 kB' 'AnonPages: 122360 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66844 kB' 'Slab: 140092 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.954 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.955 node0=1025 expecting 1025 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:00.955 00:04:00.955 real 0m0.510s 00:04:00.955 user 0m0.267s 00:04:00.955 sys 0m0.256s 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.955 11:22:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.955 ************************************ 00:04:00.955 END TEST odd_alloc 00:04:00.955 ************************************ 00:04:00.955 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.955 11:22:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:00.955 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:04:00.955 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.955 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.955 ************************************ 00:04:00.955 START TEST custom_alloc 00:04:00.955 ************************************ 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.956 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.217 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.217 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.217 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9059140 kB' 'MemAvailable: 10562140 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493456 kB' 'Inactive: 1344316 kB' 'Active(anon): 131256 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140100 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73256 kB' 'KernelStack: 6244 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.217 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.218 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9059140 kB' 'MemAvailable: 10562140 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493132 kB' 'Inactive: 1344316 kB' 'Active(anon): 130932 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140104 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73260 kB' 'KernelStack: 6320 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
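The trace above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches AnonHugePages (0 kB on this runner); the value is echoed, hugepages.sh stores it as anon=0, and the same helper is immediately invoked again for HugePages_Surp. A rough reconstruction of that helper, inferred from the @17-@33 line tags in this trace and not the verbatim SPDK source, looks like this:

shopt -s extglob    # needed for the +([0-9]) pattern used to strip "Node N " prefixes

get_meminfo() {
    local get=$1 node=${2:-}            # meminfo key to fetch, optional NUMA node
    local var val _
    local mem_f mem line
    mem_f=/proc/meminfo
    # with a node argument, prefer the per-node meminfo file if it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix; strip it
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # every non-matching key is one "continue" line in the trace
        echo "$val"
        return 0
    done
}

Each "continue" entry in the log corresponds to one rejected meminfo key, which is why a single get_meminfo call produces several dozen trace lines; get_meminfo HugePages_Surp prints 0 on this VM, which becomes surp=0 further down.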
00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.219 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.220 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9059140 kB' 'MemAvailable: 10562140 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493100 kB' 'Inactive: 1344316 kB' 'Active(anon): 130900 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122268 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140100 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73256 kB' 'KernelStack: 6304 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.220 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
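The dump just printed is another full /proc/meminfo snapshot, which get_meminfo now scans for HugePages_Rsvd. Judging from the hugepages.sh line tags visible in this trace (@97 through @110) and the values it expands, the surrounding custom_alloc accounting amounts to roughly the following; this is a sketch only, the variable wiring outside the trace is assumed, and it relies on the get_meminfo helper sketched earlier:

nr_hugepages=512                      # page count the test configured for this allocation
anon=$(get_meminfo AnonHugePages)     # -> 0 (kB of anonymous huge pages in use)
surp=$(get_meminfo HugePages_Surp)    # -> 0 surplus pages
resv=$(get_meminfo HugePages_Rsvd)    # -> 0 reserved pages
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
# consistency checks seen at hugepages.sh@107 and @109: the expected total of
# 512 pages must be covered by nr_hugepages plus surplus and reserved pages
(( 512 == nr_hugepages + surp + resv ))
(( 512 == nr_hugepages ))
get_meminfo HugePages_Total           # re-scanned in the trace that follows

512 pages of 2048 kB each is 1048576 kB, which matches the Hugetlb figure reported in the meminfo dumps above.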
00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.221 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.222 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.482 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.483 nr_hugepages=512 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:01.483 resv_hugepages=0 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.483 surplus_hugepages=0 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.483 anon_hugepages=0 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.483 
11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9059140 kB' 'MemAvailable: 10562140 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493052 kB' 'Inactive: 1344316 kB' 'Active(anon): 130852 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122220 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140100 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73256 kB' 'KernelStack: 6288 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.483 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.484 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.484 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9058888 kB' 'MemUsed: 3183084 kB' 'SwapCached: 0 kB' 'Active: 493236 kB' 'Inactive: 1344316 kB' 'Active(anon): 131036 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1716980 kB' 'Mapped: 48868 kB' 'AnonPages: 121928 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66844 kB' 'Slab: 140100 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.485 
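[Editor's note] The surrounding trace (hugepages.sh@27-130) is the per-node cross-check that ends further below in "node0=512 expecting 512": get_nodes records the kernel's per-node hugepage count, reserved and surplus pages (both 0 in this run) are added to the expected count, and the two figures must match. A condensed sketch of that check, reusing the get_meminfo sketch above: the arrays nodes_sys and nodes_test and the HugePages_Surp lookup come from the trace, while the function name verify_node_hugepages and the use of get_meminfo HugePages_Total to obtain the per-node count are assumptions made for illustration.

    verify_node_hugepages() {
        local node resv=0 surp
        local -A nodes_sys nodes_test

        # get_nodes: one entry per NUMA node; this VM exposes a single node (no_nodes=1).
        for node in /sys/devices/system/node/node[0-9]*; do
            [[ -d $node ]] || continue
            node=${node##*node}
            nodes_sys[$node]=$(get_meminfo HugePages_Total "$node")   # 512 in this run
        done

        # The test pre-loads nodes_test with the pages it requested (512 on node 0).
        nodes_test[0]=512

        for node in "${!nodes_test[@]}"; do
            # Expected pages = requested + reserved + per-node surplus (both 0 here).
            nodes_test[$node]=$(( nodes_test[node] + resv ))
            surp=$(get_meminfo HugePages_Surp "$node")
            nodes_test[$node]=$(( nodes_test[node] + surp ))
            echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
            [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]]
        done
    }

With the values shown in this run the loop prints "node0=512 expecting 512" and the final comparison succeeds, which is exactly the [[ 512 == 512 ]] check the trace reaches before the test reports END TEST custom_alloc.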
11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.485 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.486 node0=512 expecting 512 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:01.486 00:04:01.486 real 0m0.486s 00:04:01.486 user 0m0.262s 00:04:01.486 sys 0m0.259s 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.486 11:22:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.486 ************************************ 00:04:01.486 END TEST custom_alloc 00:04:01.486 ************************************ 00:04:01.486 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.486 11:22:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.486 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.486 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.486 11:22:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.486 ************************************ 00:04:01.486 START TEST no_shrink_alloc 00:04:01.486 ************************************ 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.486 11:22:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.746 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.746 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008536 kB' 'MemAvailable: 9511536 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 494092 kB' 'Inactive: 1344316 kB' 'Active(anon): 131892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140180 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73336 kB' 'KernelStack: 6340 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.746 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008036 kB' 'MemAvailable: 9511036 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493576 kB' 'Inactive: 1344316 kB' 'Active(anon): 131376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140180 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73336 kB' 'KernelStack: 6356 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.747 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.748 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 
11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.010 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008036 kB' 'MemAvailable: 9511036 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493396 kB' 'Inactive: 1344316 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140160 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73316 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.011 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 
11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.012 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.012 nr_hugepages=1024 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.012 resv_hugepages=0 00:04:02.012 surplus_hugepages=0 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.012 anon_hugepages=0 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.012 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008036 kB' 'MemAvailable: 9511036 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493136 kB' 'Inactive: 1344316 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140152 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73308 kB' 'KernelStack: 6320 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 
0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
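The long runs of "continue" entries above and below are the xtrace of the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file with the "Node N" prefix stripped), walks the fields with IFS=': ' read -r var val _, skips every key that does not match the one requested, and echoes the value of the one that does. A minimal sketch of that loop, reconstructed from this trace rather than taken from the SPDK source, could look like the following; the function name and variables mirror the trace, but the exact control flow is an assumption.

    #!/usr/bin/env bash
    # Sketch of the field-matching loop traced here (setup/common.sh@17-33).
    # Reconstructed from the xtrace above; not the actual SPDK helper.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local var val _ line
        local -a mem
        # With a node argument, prefer the per-node meminfo, as the trace does.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Every field that is not the requested one shows up as "continue" in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

On the machine in this log, get_meminfo HugePages_Total would print 1024 and get_meminfo HugePages_Surp 0 would print 0, matching the "echo 1024" and "echo 0" entries at common.sh@33 further down.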
00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.013 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8008036 kB' 'MemUsed: 4233936 kB' 'SwapCached: 0 kB' 'Active: 493092 kB' 'Inactive: 1344316 kB' 'Active(anon): 130892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1716980 kB' 'Mapped: 48608 kB' 'AnonPages: 122336 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66844 kB' 'Slab: 140152 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.014 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.015 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.016 node0=1024 expecting 1024 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.016 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.277 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.277 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.277 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.277 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8003752 kB' 'MemAvailable: 9506752 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493812 kB' 'Inactive: 1344316 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48728 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140164 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73320 kB' 'KernelStack: 6308 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 
11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.277 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
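The hugepages.sh entries in this stretch are verify_nr_hugepages at work: it checks the transparent-hugepage setting, reads AnonHugePages, confirms that HugePages_Total equals nr_hugepages plus surplus and reserved pages (hugepages.sh@110), and then compares each node's actual allocation against the expectation, which is where the earlier "node0=1024 expecting 1024" line comes from; the "INFO: Requested 512 hugepages but 1024 already allocated on node0" message is scripts/setup.sh reacting to NRHUGE=512 with CLEAR_HUGE=no. A rough sketch of that per-node bookkeeping, reusing the get_meminfo sketch above, is shown below; the helper and array names follow the trace, but the exact accounting (where resv and surp are added, and the 2048 kB sysfs path) is an assumption.

    # Rough sketch of the per-node verification traced around hugepages.sh@110-130.
    # Reconstructed from the xtrace; not the actual SPDK script.
    shopt -s extglob
    verify_nr_hugepages_sketch() {
        local nr_hugepages=$1            # 1024 in this run
        local node surp resv
        local -A nodes_sys nodes_test
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        # Global check: total must equal requested + surplus + reserved.
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
        # get_nodes: what each NUMA node actually has allocated right now
        # (2048 kB pages, per the Hugepagesize reported in this log).
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        # Build the expectation per node and report both numbers, as the log does.
        for node in "${!nodes_sys[@]}"; do
            nodes_test[$node]=$nr_hugepages
            (( nodes_test[$node] += resv ))
            (( nodes_test[$node] += $(get_meminfo HugePages_Surp "$node") ))
            echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
        done
    }

With a single node and no surplus or reserved pages, both sides come out to 1024, matching the [[ 1024 == 1024 ]] check at hugepages.sh@130 seen earlier in the trace.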
00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.278 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8003752 kB' 'MemAvailable: 9506752 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493824 kB' 'Inactive: 1344316 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122700 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140164 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73320 kB' 'KernelStack: 6320 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 
11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.279 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
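The trace above is the helper's field-matching loop scanning for HugePages_Surp: each /proc/meminfo line is split with IFS=': ' into a field name and a value, names that do not match fall through to continue, and the value of the first match is echoed back (the earlier AnonHugePages lookup returned 0 the same way). A minimal sketch of that pattern, assuming an illustrative helper name and omitting the per-node and Node-prefix handling the real setup/common.sh helper carries:

  get_meminfo_sketch() {
      # $1 is the field to look up, e.g. HugePages_Surp
      local get=$1 var val _
      # IFS=': ' splits "HugePages_Surp:       0" into var=HugePages_Surp, val=0
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done </proc/meminfo
  }

  get_meminfo_sketch HugePages_Surp   # prints 0 on this runner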
00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8003752 kB' 'MemAvailable: 9506752 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493184 kB' 'Inactive: 1344316 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140164 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73320 kB' 'KernelStack: 6272 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.280 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.281 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.542 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
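The same lookup now repeats for HugePages_Rsvd. Before each scan the helper also picks its input file: the -e test on /sys/devices/system/node/node/meminfo in the trace is the system-wide case, where the node number is empty, the per-node path does not exist, and the default /proc/meminfo is kept. A hedged sketch of that selection (node_f is an illustrative name, not necessarily the variable the real script uses):

  # system-wide query: $node is empty, so the per-node file is absent
  mem_f=/proc/meminfo
  node_f=/sys/devices/system/node/node$node/meminfo
  [[ -e $node_f ]] && mem_f=$node_f

The surplus and reserved counts recovered this way feed the pool consistency check a few entries below, (( 1024 == nr_hugepages + surp + resv )), with surp and resv both 0 on this run.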
00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.543 nr_hugepages=1024 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.543 resv_hugepages=0 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.543 surplus_hugepages=0 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.543 anon_hugepages=0 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8003752 kB' 'MemAvailable: 9506752 kB' 'Buffers: 2436 kB' 'Cached: 1714544 kB' 'SwapCached: 0 kB' 'Active: 493028 kB' 'Inactive: 1344316 kB' 'Active(anon): 130828 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122160 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66844 kB' 'Slab: 140164 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73320 kB' 'KernelStack: 6240 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 5093376 kB' 'DirectMap1G: 9437184 kB' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8003752 kB' 'MemUsed: 4238220 kB' 'SwapCached: 0 kB' 'Active: 493088 kB' 'Inactive: 1344316 kB' 'Active(anon): 130888 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1344316 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1716980 kB' 'Mapped: 48608 kB' 'AnonPages: 122272 kB' 'Shmem: 10464 kB' 'KernelStack: 6308 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66844 kB' 'Slab: 140168 kB' 'SReclaimable: 66844 kB' 'SUnreclaim: 73324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 
11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.546 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 11:22:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.547 node0=1024 expecting 1024 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.547 00:04:02.547 real 0m1.040s 00:04:02.547 user 0m0.541s 00:04:02.547 sys 0m0.564s 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.547 11:22:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.547 ************************************ 00:04:02.547 END TEST no_shrink_alloc 00:04:02.547 ************************************ 00:04:02.547 11:22:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.547 11:22:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.547 00:04:02.547 real 0m4.580s 00:04:02.547 user 0m2.250s 00:04:02.547 sys 0m2.343s 00:04:02.547 11:22:39 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.547 11:22:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.547 ************************************ 00:04:02.547 END TEST hugepages 00:04:02.547 ************************************ 00:04:02.547 11:22:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.547 11:22:39 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:02.547 11:22:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.547 11:22:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.547 11:22:39 
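The long block above is the xtrace of setup/common.sh's get_meminfo: it prints /proc/meminfo (or a node's meminfo) and walks it key by key, continuing past every field until it reaches the requested one, which is why each meminfo key appears once per lookup. A minimal sketch of that scan, assuming the usual "Key: value kB" format; the function name and the sed-based handling of the per-node "Node <n> " prefix are illustrative rather than the exact SPDK helper:

#!/usr/bin/env bash
# Sketch of the field scan that produced the block above: read one
# "Key: value [kB]" line at a time and stop at the requested key.
get_meminfo_value() {
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every non-matching key just falls through, as logged
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$file")   # per-node meminfo prefixes lines with "Node <n> "
    return 1
}

# e.g. the total hugepages check, then the per-node surplus used at hugepages.sh@117
get_meminfo_value HugePages_Total
get_meminfo_value HugePages_Surp /sys/devices/system/node/node0/meminfo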
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.547 ************************************ 00:04:02.547 START TEST driver 00:04:02.547 ************************************ 00:04:02.547 11:22:39 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:02.547 * Looking for test storage... 00:04:02.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.547 11:22:40 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:02.547 11:22:40 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.547 11:22:40 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.113 11:22:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:03.113 11:22:40 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.113 11:22:40 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.113 11:22:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:03.113 ************************************ 00:04:03.113 START TEST guess_driver 00:04:03.113 ************************************ 00:04:03.113 11:22:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:03.113 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:03.113 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:03.372 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:03.372 Looking for driver=uio_pci_generic 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ 
\d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.372 11:22:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.939 11:22:41 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.505 00:04:04.505 real 0m1.355s 00:04:04.505 user 0m0.491s 00:04:04.505 sys 0m0.860s 00:04:04.505 11:22:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.505 11:22:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.505 ************************************ 00:04:04.505 END TEST guess_driver 00:04:04.505 ************************************ 00:04:04.506 11:22:41 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:04.506 00:04:04.506 real 0m2.045s 00:04:04.506 user 0m0.707s 00:04:04.506 sys 0m1.387s 00:04:04.506 11:22:41 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.506 11:22:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.506 ************************************ 00:04:04.506 END TEST driver 00:04:04.506 ************************************ 00:04:04.763 11:22:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:04.763 11:22:42 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:04.763 11:22:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.763 11:22:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.763 11:22:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:04.763 ************************************ 00:04:04.763 START TEST devices 00:04:04.763 ************************************ 00:04:04.763 11:22:42 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
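guess_driver, finished above, prefers vfio-pci when populated IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic after confirming the module resolves via modprobe --show-depends; with zero IOMMU groups in this VM it settles on uio_pci_generic. A condensed sketch of that decision using the same sysfs paths; the function name is illustrative:

#!/usr/bin/env bash
# Sketch of the driver choice traced above.
shopt -s nullglob   # an empty /sys/kernel/iommu_groups must yield a zero-length array

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=""
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
        echo vfio-pci                       # IOMMU available: bind with vfio-pci
    elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        echo uio_pci_generic                # the branch taken in this VM (0 IOMMU groups)
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

pick_driver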
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:04.763 * Looking for test storage... 00:04:04.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.763 11:22:42 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:04.763 11:22:42 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:04.763 11:22:42 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.763 11:22:42 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.697 11:22:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:05.697 11:22:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:05.698 11:22:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.698 11:22:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.698 
11:22:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:05.698 No valid GPT data, bailing 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:05.698 11:22:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:05.698 11:22:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:05.698 11:22:42 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:05.698 11:22:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:05.698 No valid GPT data, bailing 00:04:05.698 11:22:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:05.698 11:22:43 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:05.698 No valid GPT data, bailing 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:05.698 No valid GPT data, bailing 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.698 11:22:43 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:05.698 11:22:43 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:05.698 11:22:43 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:05.698 11:22:43 setup.sh.devices -- 
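The enumeration above keeps an NVMe namespace only if it is not zoned, spdk-gpt.py/blkid find no existing partition table ("No valid GPT data, bailing"), and it meets min_disk_size, recording the owning PCI address for each survivor (three namespaces on 0000:00:11.0 plus nvme1n1 on 0000:00:10.0). A rough sketch of that filter; the blkid check and the sysfs-based PCI lookup stand in for the SPDK helpers and are assumptions:

#!/usr/bin/env bash
# Sketch: collect idle, large-enough NVMe namespaces and their PCI addresses.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as set at devices.sh@198
declare -a blocks
declare -A blocks_to_pci

for sys_block in /sys/block/nvme*; do
    dev=${sys_block##*/}
    [[ $dev == *c* ]] && continue                                    # skip controller paths, like the !(*c*) glob
    [[ $(<"$sys_block/queue/zoned") == none ]] || continue           # skip zoned namespaces
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue     # already labelled -> in use
    size=$(( $(<"$sys_block/size") * 512 ))                          # size file counts 512 B sectors
    ((size >= min_disk_size)) || continue
    blocks+=("$dev")
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$sys_block/device/device")")   # assumed sysfs layout
done

for dev in "${blocks[@]}"; do
    echo "$dev -> ${blocks_to_pci[$dev]}"
done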
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.698 11:22:43 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.698 11:22:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:05.698 ************************************ 00:04:05.698 START TEST nvme_mount 00:04:05.698 ************************************ 00:04:05.698 11:22:43 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:05.698 11:22:43 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:05.698 11:22:43 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:05.698 11:22:43 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.698 11:22:43 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.698 11:22:43 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:05.956 11:22:43 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:06.917 Creating new GPT entries in memory. 00:04:06.917 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:06.917 other utilities. 00:04:06.917 11:22:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:06.917 11:22:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.917 11:22:44 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:06.917 11:22:44 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:06.917 11:22:44 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:07.849 Creating new GPT entries in memory. 00:04:07.849 The operation has completed successfully. 
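At this point the test disk has been relabelled: sgdisk --zap-all cleared any previous GPT/MBR, and a flock-guarded sgdisk --new=1:2048:264191 created partition 1 covering the 1 GiB the test asked for, expressed in this disk's logical sectors, after which a helper waits for the partition uevent. A sketch of the same step on a scratch disk; udevadm settle here is a stand-in for SPDK's sync_dev_uevents.sh, and the device path is an assumption:

#!/usr/bin/env bash
# Sketch of the wipe + single-partition step logged above. Destructive:
# point $disk at a disposable device only.
disk=/dev/nvme0n1
part_size=$((1024 * 1024 * 1024))            # 1 GiB, the size requested by partition_drive
sector=$(blockdev --getss "$disk")           # logical sector size (the "size /= 4096" above
                                             # implies 4 KiB logical blocks on this vagrant disk)
sectors=$((part_size / sector))

sgdisk "$disk" --zap-all                                   # drop any existing GPT/MBR structures
part_start=2048
part_end=$((part_start + sectors - 1))                     # 2048..264191 in the log
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"
udevadm settle                                             # wait for the new partition node
ls "/dev/${disk##*/}p1"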
00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58974 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.849 11:22:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.106 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.106 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:08.106 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:08.106 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.106 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.106 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.364 11:22:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.364 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.364 11:22:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.622 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.622 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.622 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.622 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.622 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.880 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.880 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:08.880 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:08.880 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.880 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.880 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.141 11:22:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.398 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.398 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:09.398 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.398 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.398 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.398 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.656 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.656 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.656 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.656 11:22:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.656 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.656 00:04:09.656 real 0m3.896s 00:04:09.656 user 0m0.665s 00:04:09.656 sys 0m0.980s 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.656 11:22:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.656 ************************************ 00:04:09.656 END TEST nvme_mount 00:04:09.656 
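(For readers skimming the trace: the nvme_mount flow recorded above reduces to roughly the shell sequence below. This is a minimal sketch, not the test script itself; it assumes root privileges and an existing /dev/nvme0n1p1, and it substitutes a plain touch for the redirection the traced helper uses to create its marker file.)
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mkdir -p "$mnt"
  mkfs.ext4 -qF /dev/nvme0n1p1        # format the partition created earlier
  mount /dev/nvme0n1p1 "$mnt"         # mount it under the test tree
  touch "$mnt/test_nvme"              # marker file the verify step looks for
  # cleanup_nvme, as traced above:
  umount "$mnt"
  wipefs --all /dev/nvme0n1p1
  wipefs --all /dev/nvme0n1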
************************************ 00:04:09.656 11:22:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:09.656 11:22:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:09.656 11:22:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.656 11:22:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.656 11:22:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.656 ************************************ 00:04:09.656 START TEST dm_mount 00:04:09.656 ************************************ 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.656 11:22:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:11.052 Creating new GPT entries in memory. 00:04:11.052 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:11.052 other utilities. 00:04:11.052 11:22:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:11.052 11:22:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.052 11:22:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:11.052 11:22:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.052 11:22:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:11.986 Creating new GPT entries in memory. 00:04:11.986 The operation has completed successfully. 00:04:11.987 11:22:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.987 11:22:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.987 11:22:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.987 11:22:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.987 11:22:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:12.919 The operation has completed successfully. 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59407 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.919 
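(The dm_mount preparation traced above -- sgdisk zap, two new partitions, dmsetup create, mkfs, mount -- can be reproduced by hand roughly as below. The sector ranges match the log; the linear device-mapper table is an assumption for illustration, since the trace shows only the dmsetup create call, not the table the helper feeds it.)
  sgdisk /dev/nvme0n1 --zap-all
  sgdisk /dev/nvme0n1 --new=1:2048:264191     # ~128 MiB -> /dev/nvme0n1p1
  sgdisk /dev/nvme0n1 --new=2:264192:526335   # ~128 MiB -> /dev/nvme0n1p2
  s1=$(blockdev --getsz /dev/nvme0n1p1)       # sizes in 512-byte sectors
  s2=$(blockdev --getsz /dev/nvme0n1p2)
  dmsetup create nvme_dm_test <<EOF
  0 $s1 linear /dev/nvme0n1p1 0
  $s1 $s2 linear /dev/nvme0n1p2 0
  EOF
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount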
11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.919 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.176 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:13.433 11:22:50 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.433 11:22:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.690 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.690 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:13.690 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:13.690 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.690 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.690 11:22:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.690 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.690 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.690 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.690 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:13.947 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:13.947 00:04:13.947 real 0m4.161s 00:04:13.947 user 0m0.412s 00:04:13.947 sys 0m0.705s 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.947 ************************************ 00:04:13.947 11:22:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.947 END TEST dm_mount 00:04:13.947 ************************************ 00:04:13.947 11:22:51 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.947 11:22:51 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:14.205 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:14.205 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:14.205 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:14.205 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:14.205 11:22:51 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:14.205 00:04:14.205 real 0m9.588s 00:04:14.205 user 0m1.762s 00:04:14.205 sys 0m2.254s 00:04:14.205 11:22:51 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.205 11:22:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:14.205 ************************************ 00:04:14.205 END TEST devices 00:04:14.205 ************************************ 00:04:14.205 11:22:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.205 ************************************ 00:04:14.205 END TEST setup.sh 00:04:14.205 ************************************ 00:04:14.205 00:04:14.205 real 0m21.079s 00:04:14.205 user 0m6.903s 00:04:14.205 sys 0m8.592s 00:04:14.205 11:22:51 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.205 11:22:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.462 11:22:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.462 11:22:51 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:15.029 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.029 Hugepages 00:04:15.029 node hugesize free / total 00:04:15.029 node0 1048576kB 0 / 0 00:04:15.029 node0 2048kB 2048 / 2048 00:04:15.029 00:04:15.029 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.029 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:15.029 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:15.287 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:15.287 11:22:52 -- spdk/autotest.sh@130 -- # uname -s 00:04:15.287 11:22:52 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:15.287 11:22:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:15.287 11:22:52 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.111 11:22:53 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:17.068 11:22:54 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:17.068 11:22:54 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:17.068 11:22:54 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:17.068 11:22:54 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:17.068 11:22:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:17.068 11:22:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:17.068 11:22:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.068 11:22:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:17.068 11:22:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:17.068 11:22:54 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:17.068 11:22:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:17.068 11:22:54 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.326 Waiting for block devices as requested 00:04:17.584 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.584 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.584 11:22:54 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:17.584 11:22:54 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:17.584 11:22:54 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:17.584 11:22:54 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:17.584 11:22:54 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.584 11:22:54 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:17.584 11:22:54 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.584 11:22:54 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:17.584 11:22:54 -- common/autotest_common.sh@1539 -- # 
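(The setup.sh status block above -- per-node hugepage counts, the BDF/driver table, and the "nvme -> uio_pci_generic" rebind lines -- is all driven through sysfs. A hand-rolled equivalent is sketched below purely to make those lines concrete; the BDF is illustrative and setup.sh's own implementation differs in detail.)
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages  # per-node 2 MB hugepage count
  bdf=0000:00:10.0
  basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)"                  # current driver, e.g. nvme
  echo "$bdf"          > /sys/bus/pci/drivers/nvme/unbind                     # detach the kernel nvme driver
  echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
  echo "$bdf"          > /sys/bus/pci/drivers_probe                           # rebind to uio_pci_generic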
nvme_ctrlr=/dev/nvme1 00:04:17.584 11:22:54 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:17.584 11:22:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:17.584 11:22:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:17.584 11:22:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1557 -- # continue 00:04:17.584 11:22:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:17.584 11:22:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:17.584 11:22:55 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:17.584 11:22:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:17.584 11:22:55 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:17.584 11:22:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:17.584 11:22:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:17.584 11:22:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:17.584 11:22:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:17.584 11:22:55 -- common/autotest_common.sh@1557 -- # continue 00:04:17.584 11:22:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:17.584 11:22:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.584 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:04:17.842 11:22:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:17.842 11:22:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.842 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:04:17.842 11:22:55 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
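(The nvme_namespace_revert loop above runs the same probe against each controller; as a standalone sketch -- nvme-cli assumed installed, /dev/nvme1 illustrative, the echo message is paraphrase -- the check amounts to:)
  oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)     # e.g. 0x12a in this run
  if (( (oacs & 0x8) != 0 )); then                              # OACS bit 3: namespace management
    unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "ns management supported, no unallocated capacity -> skip revert"
  fi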
binding PCI dev 00:04:18.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.668 11:22:55 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:18.668 11:22:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.668 11:22:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.668 11:22:55 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:18.668 11:22:55 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:18.668 11:22:55 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.668 11:22:55 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:18.668 11:22:55 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:18.668 11:22:55 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:18.668 11:22:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.668 11:22:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.668 11:22:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.668 11:22:55 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.668 11:22:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:18.668 11:22:56 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:18.668 11:22:56 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:18.668 11:22:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:18.668 11:22:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:18.668 11:22:56 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:18.668 11:22:56 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.668 11:22:56 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:18.668 11:22:56 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:18.668 11:22:56 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:18.668 11:22:56 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.668 11:22:56 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:18.668 11:22:56 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:18.668 11:22:56 -- common/autotest_common.sh@1593 -- # return 0 00:04:18.668 11:22:56 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:18.668 11:22:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:18.668 11:22:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.668 11:22:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.668 11:22:56 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:18.668 11:22:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.668 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:04:18.668 11:22:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:18.668 11:22:56 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.668 11:22:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.668 11:22:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.668 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:04:18.668 ************************************ 00:04:18.668 START TEST env 00:04:18.668 ************************************ 00:04:18.668 11:22:56 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.668 * Looking for test storage... 
00:04:18.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:18.926 11:22:56 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.926 11:22:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.926 11:22:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.926 11:22:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.926 ************************************ 00:04:18.926 START TEST env_memory 00:04:18.926 ************************************ 00:04:18.926 11:22:56 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.926 00:04:18.926 00:04:18.926 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.926 http://cunit.sourceforge.net/ 00:04:18.926 00:04:18.926 00:04:18.926 Suite: memory 00:04:18.926 Test: alloc and free memory map ...[2024-07-15 11:22:56.206685] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.926 passed 00:04:18.926 Test: mem map translation ...[2024-07-15 11:22:56.237249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.926 [2024-07-15 11:22:56.237300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.926 [2024-07-15 11:22:56.237357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.926 [2024-07-15 11:22:56.237368] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.926 passed 00:04:18.926 Test: mem map registration ...[2024-07-15 11:22:56.302009] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:18.926 [2024-07-15 11:22:56.302062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:18.926 passed 00:04:18.926 Test: mem map adjacent registrations ...passed 00:04:18.926 00:04:18.926 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.926 suites 1 1 n/a 0 0 00:04:18.926 tests 4 4 4 0 0 00:04:18.926 asserts 152 152 152 0 n/a 00:04:18.926 00:04:18.926 Elapsed time = 0.213 seconds 00:04:18.926 00:04:18.926 real 0m0.230s 00:04:18.926 user 0m0.214s 00:04:18.926 sys 0m0.013s 00:04:18.926 11:22:56 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.926 11:22:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:18.926 ************************************ 00:04:18.926 END TEST env_memory 00:04:18.926 ************************************ 00:04:19.186 11:22:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.186 11:22:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.186 11:22:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.186 11:22:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.186 11:22:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.186 ************************************ 00:04:19.186 START TEST env_vtophys 
00:04:19.186 ************************************ 00:04:19.186 11:22:56 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.186 EAL: lib.eal log level changed from notice to debug 00:04:19.186 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 1 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 2 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 3 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 4 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 5 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 6 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 7 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 8 as core 0 on socket 0 00:04:19.186 EAL: Detected lcore 9 as core 0 on socket 0 00:04:19.186 EAL: Maximum logical cores by configuration: 128 00:04:19.186 EAL: Detected CPU lcores: 10 00:04:19.186 EAL: Detected NUMA nodes: 1 00:04:19.186 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:19.186 EAL: Detected shared linkage of DPDK 00:04:19.186 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.186 EAL: Selected IOVA mode 'PA' 00:04:19.186 EAL: Probing VFIO support... 00:04:19.186 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.186 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:19.186 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.186 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.186 EAL: Setting up physically contiguous memory... 00:04:19.186 EAL: Setting maximum number of open files to 524288 00:04:19.186 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.186 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.186 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.186 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.186 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.186 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.186 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.186 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.186 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.186 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.186 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.186 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.186 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.186 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.186 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.186 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.186 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.186 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.186 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.186 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.186 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.186 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.186 EAL: Hugepages will be freed exactly as allocated. 
00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: TSC frequency is ~2200000 KHz 00:04:19.186 EAL: Main lcore 0 is ready (tid=7f422107da00;cpuset=[0]) 00:04:19.186 EAL: Trying to obtain current memory policy. 00:04:19.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.186 EAL: Restoring previous memory policy: 0 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.186 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.186 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.186 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.186 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:19.186 00:04:19.186 00:04:19.186 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.186 http://cunit.sourceforge.net/ 00:04:19.186 00:04:19.186 00:04:19.186 Suite: components_suite 00:04:19.186 Test: vtophys_malloc_test ...passed 00:04:19.186 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.186 EAL: Restoring previous memory policy: 4 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.186 EAL: Trying to obtain current memory policy. 00:04:19.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.186 EAL: Restoring previous memory policy: 4 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.186 EAL: Trying to obtain current memory policy. 00:04:19.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.186 EAL: Restoring previous memory policy: 4 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.186 EAL: Trying to obtain current memory policy. 
00:04:19.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.186 EAL: Restoring previous memory policy: 4 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.186 EAL: request: mp_malloc_sync 00:04:19.186 EAL: No shared files mode enabled, IPC is disabled 00:04:19.186 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.186 EAL: Trying to obtain current memory policy. 00:04:19.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.187 EAL: Restoring previous memory policy: 4 00:04:19.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.187 EAL: request: mp_malloc_sync 00:04:19.187 EAL: No shared files mode enabled, IPC is disabled 00:04:19.187 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.187 EAL: request: mp_malloc_sync 00:04:19.187 EAL: No shared files mode enabled, IPC is disabled 00:04:19.187 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.187 EAL: Trying to obtain current memory policy. 00:04:19.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.187 EAL: Restoring previous memory policy: 4 00:04:19.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.187 EAL: request: mp_malloc_sync 00:04:19.187 EAL: No shared files mode enabled, IPC is disabled 00:04:19.187 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.187 EAL: request: mp_malloc_sync 00:04:19.187 EAL: No shared files mode enabled, IPC is disabled 00:04:19.187 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.187 EAL: Trying to obtain current memory policy. 00:04:19.187 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.187 EAL: Restoring previous memory policy: 4 00:04:19.187 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.187 EAL: request: mp_malloc_sync 00:04:19.187 EAL: No shared files mode enabled, IPC is disabled 00:04:19.187 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.446 EAL: request: mp_malloc_sync 00:04:19.446 EAL: No shared files mode enabled, IPC is disabled 00:04:19.446 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.446 EAL: Trying to obtain current memory policy. 00:04:19.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.446 EAL: Restoring previous memory policy: 4 00:04:19.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.446 EAL: request: mp_malloc_sync 00:04:19.446 EAL: No shared files mode enabled, IPC is disabled 00:04:19.446 EAL: Heap on socket 0 was expanded by 258MB 00:04:19.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.446 EAL: request: mp_malloc_sync 00:04:19.446 EAL: No shared files mode enabled, IPC is disabled 00:04:19.446 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.446 EAL: Trying to obtain current memory policy. 
00:04:19.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.446 EAL: Restoring previous memory policy: 4 00:04:19.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.446 EAL: request: mp_malloc_sync 00:04:19.446 EAL: No shared files mode enabled, IPC is disabled 00:04:19.446 EAL: Heap on socket 0 was expanded by 514MB 00:04:19.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.704 EAL: request: mp_malloc_sync 00:04:19.704 EAL: No shared files mode enabled, IPC is disabled 00:04:19.704 EAL: Heap on socket 0 was shrunk by 514MB 00:04:19.704 EAL: Trying to obtain current memory policy. 00:04:19.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.704 EAL: Restoring previous memory policy: 4 00:04:19.704 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.704 EAL: request: mp_malloc_sync 00:04:19.704 EAL: No shared files mode enabled, IPC is disabled 00:04:19.704 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.962 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.962 passed 00:04:19.962 00:04:19.962 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.962 suites 1 1 n/a 0 0 00:04:19.962 tests 2 2 2 0 0 00:04:19.962 asserts 5239 5239 5239 0 n/a 00:04:19.962 00:04:19.962 Elapsed time = 0.701 secondsEAL: request: mp_malloc_sync 00:04:19.962 EAL: No shared files mode enabled, IPC is disabled 00:04:19.962 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.962 00:04:19.962 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.962 EAL: request: mp_malloc_sync 00:04:19.962 EAL: No shared files mode enabled, IPC is disabled 00:04:19.962 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.962 EAL: No shared files mode enabled, IPC is disabled 00:04:19.962 EAL: No shared files mode enabled, IPC is disabled 00:04:19.962 EAL: No shared files mode enabled, IPC is disabled 00:04:19.962 00:04:19.962 real 0m0.900s 00:04:19.962 user 0m0.459s 00:04:19.962 sys 0m0.306s 00:04:19.962 11:22:57 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.962 11:22:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.962 ************************************ 00:04:19.962 END TEST env_vtophys 00:04:19.962 ************************************ 00:04:19.962 11:22:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.962 11:22:57 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.962 11:22:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.962 11:22:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.962 11:22:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.962 ************************************ 00:04:19.962 START TEST env_pci 00:04:19.962 ************************************ 00:04:19.962 11:22:57 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.962 00:04:19.962 00:04:19.962 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.962 http://cunit.sourceforge.net/ 00:04:19.962 00:04:19.962 00:04:19.962 Suite: pci 00:04:19.962 Test: pci_hook ...[2024-07-15 11:22:57.395432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60590 has claimed it 00:04:19.962 passed 00:04:19.962 00:04:19.962 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.962 suites 1 1 n/a 0 0 00:04:19.962 tests 1 1 1 0 0 00:04:19.962 asserts 25 25 25 0 n/a 00:04:19.962 
00:04:19.962 Elapsed time = 0.002 seconds 00:04:19.962 EAL: Cannot find device (10000:00:01.0) 00:04:19.962 EAL: Failed to attach device on primary process 00:04:19.962 00:04:19.962 real 0m0.022s 00:04:19.962 user 0m0.012s 00:04:19.962 sys 0m0.009s 00:04:19.962 11:22:57 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.962 11:22:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.962 ************************************ 00:04:19.962 END TEST env_pci 00:04:19.962 ************************************ 00:04:20.221 11:22:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:20.221 11:22:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:20.221 11:22:57 env -- env/env.sh@15 -- # uname 00:04:20.221 11:22:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:20.221 11:22:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:20.221 11:22:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.221 11:22:57 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:20.221 11:22:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.221 11:22:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.221 ************************************ 00:04:20.221 START TEST env_dpdk_post_init 00:04:20.221 ************************************ 00:04:20.221 11:22:57 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.221 EAL: Detected CPU lcores: 10 00:04:20.221 EAL: Detected NUMA nodes: 1 00:04:20.221 EAL: Detected shared linkage of DPDK 00:04:20.221 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.221 EAL: Selected IOVA mode 'PA' 00:04:20.221 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.221 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:20.221 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:20.221 Starting DPDK initialization... 00:04:20.221 Starting SPDK post initialization... 00:04:20.221 SPDK NVMe probe 00:04:20.221 Attaching to 0000:00:10.0 00:04:20.221 Attaching to 0000:00:11.0 00:04:20.221 Attached to 0000:00:10.0 00:04:20.221 Attached to 0000:00:11.0 00:04:20.221 Cleaning up... 
00:04:20.221 00:04:20.221 real 0m0.183s 00:04:20.221 user 0m0.045s 00:04:20.221 sys 0m0.038s 00:04:20.221 11:22:57 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.221 11:22:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.221 ************************************ 00:04:20.221 END TEST env_dpdk_post_init 00:04:20.221 ************************************ 00:04:20.221 11:22:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:20.221 11:22:57 env -- env/env.sh@26 -- # uname 00:04:20.221 11:22:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.221 11:22:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.221 11:22:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.221 11:22:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.221 11:22:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.221 ************************************ 00:04:20.221 START TEST env_mem_callbacks 00:04:20.221 ************************************ 00:04:20.221 11:22:57 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.480 EAL: Detected CPU lcores: 10 00:04:20.480 EAL: Detected NUMA nodes: 1 00:04:20.480 EAL: Detected shared linkage of DPDK 00:04:20.480 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.480 EAL: Selected IOVA mode 'PA' 00:04:20.480 00:04:20.480 00:04:20.480 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.480 http://cunit.sourceforge.net/ 00:04:20.480 00:04:20.480 00:04:20.480 Suite: memory 00:04:20.480 Test: test ... 00:04:20.480 register 0x200000200000 2097152 00:04:20.480 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.480 malloc 3145728 00:04:20.480 register 0x200000400000 4194304 00:04:20.480 buf 0x200000500000 len 3145728 PASSED 00:04:20.480 malloc 64 00:04:20.480 buf 0x2000004fff40 len 64 PASSED 00:04:20.480 malloc 4194304 00:04:20.480 register 0x200000800000 6291456 00:04:20.480 buf 0x200000a00000 len 4194304 PASSED 00:04:20.480 free 0x200000500000 3145728 00:04:20.480 free 0x2000004fff40 64 00:04:20.480 unregister 0x200000400000 4194304 PASSED 00:04:20.480 free 0x200000a00000 4194304 00:04:20.480 unregister 0x200000800000 6291456 PASSED 00:04:20.480 malloc 8388608 00:04:20.480 register 0x200000400000 10485760 00:04:20.480 buf 0x200000600000 len 8388608 PASSED 00:04:20.480 free 0x200000600000 8388608 00:04:20.480 unregister 0x200000400000 10485760 PASSED 00:04:20.480 passed 00:04:20.480 00:04:20.480 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.480 suites 1 1 n/a 0 0 00:04:20.480 tests 1 1 1 0 0 00:04:20.480 asserts 15 15 15 0 n/a 00:04:20.480 00:04:20.480 Elapsed time = 0.006 seconds 00:04:20.480 00:04:20.480 real 0m0.134s 00:04:20.481 user 0m0.016s 00:04:20.481 sys 0m0.018s 00:04:20.481 11:22:57 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.481 11:22:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:20.481 ************************************ 00:04:20.481 END TEST env_mem_callbacks 00:04:20.481 ************************************ 00:04:20.481 11:22:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:20.481 ************************************ 00:04:20.481 END TEST env 00:04:20.481 ************************************ 00:04:20.481 00:04:20.481 real 0m1.799s 00:04:20.481 user 
0m0.868s 00:04:20.481 sys 0m0.582s 00:04:20.481 11:22:57 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.481 11:22:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.481 11:22:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.481 11:22:57 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:20.481 11:22:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.481 11:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.481 11:22:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.481 ************************************ 00:04:20.481 START TEST rpc 00:04:20.481 ************************************ 00:04:20.481 11:22:57 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:20.741 * Looking for test storage... 00:04:20.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.741 11:22:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60700 00:04:20.741 11:22:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.741 11:22:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60700 00:04:20.741 11:22:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:20.741 11:22:57 rpc -- common/autotest_common.sh@829 -- # '[' -z 60700 ']' 00:04:20.741 11:22:57 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.741 11:22:57 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:20.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.741 11:22:57 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.741 11:22:57 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:20.741 11:22:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.741 [2024-07-15 11:22:58.061288] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:20.741 [2024-07-15 11:22:58.061393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60700 ] 00:04:20.741 [2024-07-15 11:22:58.199870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.001 [2024-07-15 11:22:58.270824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:21.001 [2024-07-15 11:22:58.270887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60700' to capture a snapshot of events at runtime. 00:04:21.001 [2024-07-15 11:22:58.270901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.001 [2024-07-15 11:22:58.270911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.001 [2024-07-15 11:22:58.270920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60700 for offline analysis/debug. 
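Note: the rpc suite above starts spdk_tgt with the bdev tracepoint group enabled (-e bdev), waits for the JSON-RPC listener, and leaves a trace shared-memory file behind for offline analysis, as the notices indicate. A rough by-hand equivalent of that setup, assuming rpc_cmd is effectively scripts/rpc.py against the default /var/tmp/spdk.sock socket and condensing the waitforlisten helper to a plain polling loop:

  # start the target with bdev tracepoints and wait for the RPC socket (stand-in for waitforlisten)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version   # confirm the RPC server is answering
  # trace data accumulates in /dev/shm/spdk_tgt_trace.pid$spdk_pid; 'spdk_trace -s spdk_tgt -p $spdk_pid' reads it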
00:04:21.001 [2024-07-15 11:22:58.270955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.938 11:22:59 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:21.938 11:22:59 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:21.938 11:22:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.938 11:22:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.938 11:22:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:21.938 11:22:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:21.938 11:22:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.938 11:22:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.938 11:22:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.938 ************************************ 00:04:21.938 START TEST rpc_integrity 00:04:21.938 ************************************ 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.938 { 00:04:21.938 "aliases": [ 00:04:21.938 "24eefa88-1c49-4643-8abd-a5a6aea07ec8" 00:04:21.938 ], 00:04:21.938 "assigned_rate_limits": { 00:04:21.938 "r_mbytes_per_sec": 0, 00:04:21.938 "rw_ios_per_sec": 0, 00:04:21.938 "rw_mbytes_per_sec": 0, 00:04:21.938 "w_mbytes_per_sec": 0 00:04:21.938 }, 00:04:21.938 "block_size": 512, 00:04:21.938 "claimed": false, 00:04:21.938 "driver_specific": {}, 00:04:21.938 "memory_domains": [ 00:04:21.938 { 00:04:21.938 "dma_device_id": "system", 00:04:21.938 "dma_device_type": 1 00:04:21.938 }, 00:04:21.938 { 00:04:21.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.938 "dma_device_type": 2 00:04:21.938 } 00:04:21.938 ], 00:04:21.938 "name": "Malloc0", 
00:04:21.938 "num_blocks": 16384, 00:04:21.938 "product_name": "Malloc disk", 00:04:21.938 "supported_io_types": { 00:04:21.938 "abort": true, 00:04:21.938 "compare": false, 00:04:21.938 "compare_and_write": false, 00:04:21.938 "copy": true, 00:04:21.938 "flush": true, 00:04:21.938 "get_zone_info": false, 00:04:21.938 "nvme_admin": false, 00:04:21.938 "nvme_io": false, 00:04:21.938 "nvme_io_md": false, 00:04:21.938 "nvme_iov_md": false, 00:04:21.938 "read": true, 00:04:21.938 "reset": true, 00:04:21.938 "seek_data": false, 00:04:21.938 "seek_hole": false, 00:04:21.938 "unmap": true, 00:04:21.938 "write": true, 00:04:21.938 "write_zeroes": true, 00:04:21.938 "zcopy": true, 00:04:21.938 "zone_append": false, 00:04:21.938 "zone_management": false 00:04:21.938 }, 00:04:21.938 "uuid": "24eefa88-1c49-4643-8abd-a5a6aea07ec8", 00:04:21.938 "zoned": false 00:04:21.938 } 00:04:21.938 ]' 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.938 [2024-07-15 11:22:59.231393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:21.938 [2024-07-15 11:22:59.231463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.938 [2024-07-15 11:22:59.231496] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d60ad0 00:04:21.938 [2024-07-15 11:22:59.231516] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.938 [2024-07-15 11:22:59.233109] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.938 [2024-07-15 11:22:59.233152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.938 Passthru0 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.938 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.938 { 00:04:21.938 "aliases": [ 00:04:21.938 "24eefa88-1c49-4643-8abd-a5a6aea07ec8" 00:04:21.938 ], 00:04:21.938 "assigned_rate_limits": { 00:04:21.938 "r_mbytes_per_sec": 0, 00:04:21.938 "rw_ios_per_sec": 0, 00:04:21.938 "rw_mbytes_per_sec": 0, 00:04:21.938 "w_mbytes_per_sec": 0 00:04:21.938 }, 00:04:21.938 "block_size": 512, 00:04:21.938 "claim_type": "exclusive_write", 00:04:21.938 "claimed": true, 00:04:21.938 "driver_specific": {}, 00:04:21.938 "memory_domains": [ 00:04:21.938 { 00:04:21.938 "dma_device_id": "system", 00:04:21.938 "dma_device_type": 1 00:04:21.938 }, 00:04:21.938 { 00:04:21.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.938 "dma_device_type": 2 00:04:21.938 } 00:04:21.938 ], 00:04:21.938 "name": "Malloc0", 00:04:21.938 "num_blocks": 16384, 00:04:21.938 "product_name": "Malloc disk", 00:04:21.938 "supported_io_types": { 00:04:21.938 "abort": true, 00:04:21.938 "compare": false, 00:04:21.938 
"compare_and_write": false, 00:04:21.938 "copy": true, 00:04:21.938 "flush": true, 00:04:21.938 "get_zone_info": false, 00:04:21.938 "nvme_admin": false, 00:04:21.938 "nvme_io": false, 00:04:21.938 "nvme_io_md": false, 00:04:21.938 "nvme_iov_md": false, 00:04:21.938 "read": true, 00:04:21.938 "reset": true, 00:04:21.938 "seek_data": false, 00:04:21.938 "seek_hole": false, 00:04:21.938 "unmap": true, 00:04:21.938 "write": true, 00:04:21.938 "write_zeroes": true, 00:04:21.938 "zcopy": true, 00:04:21.938 "zone_append": false, 00:04:21.938 "zone_management": false 00:04:21.938 }, 00:04:21.938 "uuid": "24eefa88-1c49-4643-8abd-a5a6aea07ec8", 00:04:21.938 "zoned": false 00:04:21.938 }, 00:04:21.938 { 00:04:21.938 "aliases": [ 00:04:21.938 "77af04bb-6fe3-58d0-aadc-ffb423e18ec3" 00:04:21.938 ], 00:04:21.938 "assigned_rate_limits": { 00:04:21.938 "r_mbytes_per_sec": 0, 00:04:21.938 "rw_ios_per_sec": 0, 00:04:21.938 "rw_mbytes_per_sec": 0, 00:04:21.938 "w_mbytes_per_sec": 0 00:04:21.938 }, 00:04:21.938 "block_size": 512, 00:04:21.938 "claimed": false, 00:04:21.938 "driver_specific": { 00:04:21.938 "passthru": { 00:04:21.938 "base_bdev_name": "Malloc0", 00:04:21.938 "name": "Passthru0" 00:04:21.938 } 00:04:21.938 }, 00:04:21.938 "memory_domains": [ 00:04:21.938 { 00:04:21.938 "dma_device_id": "system", 00:04:21.938 "dma_device_type": 1 00:04:21.938 }, 00:04:21.938 { 00:04:21.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.938 "dma_device_type": 2 00:04:21.938 } 00:04:21.938 ], 00:04:21.938 "name": "Passthru0", 00:04:21.938 "num_blocks": 16384, 00:04:21.938 "product_name": "passthru", 00:04:21.938 "supported_io_types": { 00:04:21.938 "abort": true, 00:04:21.938 "compare": false, 00:04:21.938 "compare_and_write": false, 00:04:21.938 "copy": true, 00:04:21.938 "flush": true, 00:04:21.938 "get_zone_info": false, 00:04:21.938 "nvme_admin": false, 00:04:21.938 "nvme_io": false, 00:04:21.938 "nvme_io_md": false, 00:04:21.938 "nvme_iov_md": false, 00:04:21.938 "read": true, 00:04:21.938 "reset": true, 00:04:21.938 "seek_data": false, 00:04:21.938 "seek_hole": false, 00:04:21.938 "unmap": true, 00:04:21.938 "write": true, 00:04:21.938 "write_zeroes": true, 00:04:21.938 "zcopy": true, 00:04:21.938 "zone_append": false, 00:04:21.938 "zone_management": false 00:04:21.938 }, 00:04:21.938 "uuid": "77af04bb-6fe3-58d0-aadc-ffb423e18ec3", 00:04:21.938 "zoned": false 00:04:21.938 } 00:04:21.938 ]' 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.938 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.939 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.939 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.939 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.939 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.939 11:22:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.939 00:04:21.939 real 0m0.318s 00:04:21.939 user 0m0.209s 00:04:21.939 sys 0m0.041s 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.939 11:22:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.939 ************************************ 00:04:21.939 END TEST rpc_integrity 00:04:21.939 ************************************ 00:04:22.197 11:22:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:22.197 11:22:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.197 11:22:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.197 11:22:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.197 11:22:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.197 ************************************ 00:04:22.197 START TEST rpc_plugins 00:04:22.197 ************************************ 00:04:22.197 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:22.197 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.197 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.197 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.197 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.197 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.197 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.197 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.198 { 00:04:22.198 "aliases": [ 00:04:22.198 "16c8b6ab-9fb9-4248-93a7-9f1e788fe111" 00:04:22.198 ], 00:04:22.198 "assigned_rate_limits": { 00:04:22.198 "r_mbytes_per_sec": 0, 00:04:22.198 "rw_ios_per_sec": 0, 00:04:22.198 "rw_mbytes_per_sec": 0, 00:04:22.198 "w_mbytes_per_sec": 0 00:04:22.198 }, 00:04:22.198 "block_size": 4096, 00:04:22.198 "claimed": false, 00:04:22.198 "driver_specific": {}, 00:04:22.198 "memory_domains": [ 00:04:22.198 { 00:04:22.198 "dma_device_id": "system", 00:04:22.198 "dma_device_type": 1 00:04:22.198 }, 00:04:22.198 { 00:04:22.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.198 "dma_device_type": 2 00:04:22.198 } 00:04:22.198 ], 00:04:22.198 "name": "Malloc1", 00:04:22.198 "num_blocks": 256, 00:04:22.198 "product_name": "Malloc disk", 00:04:22.198 "supported_io_types": { 00:04:22.198 "abort": true, 00:04:22.198 "compare": false, 00:04:22.198 "compare_and_write": false, 00:04:22.198 "copy": true, 00:04:22.198 "flush": true, 00:04:22.198 "get_zone_info": false, 00:04:22.198 "nvme_admin": false, 00:04:22.198 "nvme_io": false, 00:04:22.198 "nvme_io_md": false, 00:04:22.198 "nvme_iov_md": false, 00:04:22.198 "read": true, 00:04:22.198 "reset": true, 00:04:22.198 "seek_data": false, 00:04:22.198 "seek_hole": false, 00:04:22.198 "unmap": true, 00:04:22.198 "write": true, 00:04:22.198 "write_zeroes": true, 
00:04:22.198 "zcopy": true, 00:04:22.198 "zone_append": false, 00:04:22.198 "zone_management": false 00:04:22.198 }, 00:04:22.198 "uuid": "16c8b6ab-9fb9-4248-93a7-9f1e788fe111", 00:04:22.198 "zoned": false 00:04:22.198 } 00:04:22.198 ]' 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:22.198 11:22:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:22.198 00:04:22.198 real 0m0.163s 00:04:22.198 user 0m0.110s 00:04:22.198 sys 0m0.016s 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.198 11:22:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.198 ************************************ 00:04:22.198 END TEST rpc_plugins 00:04:22.198 ************************************ 00:04:22.198 11:22:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:22.198 11:22:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:22.198 11:22:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.198 11:22:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.198 11:22:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.198 ************************************ 00:04:22.198 START TEST rpc_trace_cmd_test 00:04:22.198 ************************************ 00:04:22.198 11:22:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:22.198 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:22.198 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:22.198 11:22:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.198 11:22:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:22.457 11:22:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.457 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:22.457 "bdev": { 00:04:22.457 "mask": "0x8", 00:04:22.457 "tpoint_mask": "0xffffffffffffffff" 00:04:22.457 }, 00:04:22.457 "bdev_nvme": { 00:04:22.457 "mask": "0x4000", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "blobfs": { 00:04:22.457 "mask": "0x80", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "dsa": { 00:04:22.457 "mask": "0x200", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "ftl": { 00:04:22.457 "mask": "0x40", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "iaa": { 00:04:22.457 "mask": "0x1000", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "iscsi_conn": { 
00:04:22.457 "mask": "0x2", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "nvme_pcie": { 00:04:22.457 "mask": "0x800", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.457 "nvme_tcp": { 00:04:22.457 "mask": "0x2000", 00:04:22.457 "tpoint_mask": "0x0" 00:04:22.457 }, 00:04:22.458 "nvmf_rdma": { 00:04:22.458 "mask": "0x10", 00:04:22.458 "tpoint_mask": "0x0" 00:04:22.458 }, 00:04:22.458 "nvmf_tcp": { 00:04:22.458 "mask": "0x20", 00:04:22.458 "tpoint_mask": "0x0" 00:04:22.458 }, 00:04:22.458 "scsi": { 00:04:22.458 "mask": "0x4", 00:04:22.458 "tpoint_mask": "0x0" 00:04:22.458 }, 00:04:22.458 "sock": { 00:04:22.458 "mask": "0x8000", 00:04:22.458 "tpoint_mask": "0x0" 00:04:22.458 }, 00:04:22.458 "thread": { 00:04:22.458 "mask": "0x400", 00:04:22.458 "tpoint_mask": "0x0" 00:04:22.458 }, 00:04:22.458 "tpoint_group_mask": "0x8", 00:04:22.458 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60700" 00:04:22.458 }' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:22.458 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:22.717 11:22:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:22.717 00:04:22.717 real 0m0.280s 00:04:22.717 user 0m0.237s 00:04:22.717 sys 0m0.031s 00:04:22.717 11:22:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.717 11:22:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:22.717 ************************************ 00:04:22.717 END TEST rpc_trace_cmd_test 00:04:22.717 ************************************ 00:04:22.717 11:22:59 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:22.717 11:22:59 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:22.717 11:22:59 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:22.717 11:22:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.717 11:22:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.717 11:22:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.717 ************************************ 00:04:22.717 START TEST go_rpc 00:04:22.717 ************************************ 00:04:22.717 11:22:59 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.717 11:23:00 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.717 11:23:00 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.717 11:23:00 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["21d4859b-5c02-4c32-a601-8de8ea5942bc"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"21d4859b-5c02-4c32-a601-8de8ea5942bc","zoned":false}]' 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:22.717 11:23:00 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.717 11:23:00 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.717 11:23:00 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:22.717 11:23:00 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:22.977 11:23:00 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:22.977 00:04:22.977 real 0m0.216s 00:04:22.977 user 0m0.152s 00:04:22.977 sys 0m0.031s 00:04:22.977 11:23:00 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.977 11:23:00 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.977 ************************************ 00:04:22.977 END TEST go_rpc 00:04:22.977 ************************************ 00:04:22.977 11:23:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:22.977 11:23:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:22.977 11:23:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:22.977 11:23:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.977 11:23:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.977 11:23:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.977 ************************************ 00:04:22.977 START TEST rpc_daemon_integrity 00:04:22.977 ************************************ 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 
-- # '[' 0 == 0 ']' 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.977 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.977 { 00:04:22.977 "aliases": [ 00:04:22.977 "05e39ffa-a989-485a-9a29-e1ccdf00a807" 00:04:22.977 ], 00:04:22.977 "assigned_rate_limits": { 00:04:22.977 "r_mbytes_per_sec": 0, 00:04:22.977 "rw_ios_per_sec": 0, 00:04:22.977 "rw_mbytes_per_sec": 0, 00:04:22.977 "w_mbytes_per_sec": 0 00:04:22.977 }, 00:04:22.977 "block_size": 512, 00:04:22.977 "claimed": false, 00:04:22.977 "driver_specific": {}, 00:04:22.977 "memory_domains": [ 00:04:22.977 { 00:04:22.977 "dma_device_id": "system", 00:04:22.977 "dma_device_type": 1 00:04:22.977 }, 00:04:22.978 { 00:04:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.978 "dma_device_type": 2 00:04:22.978 } 00:04:22.978 ], 00:04:22.978 "name": "Malloc3", 00:04:22.978 "num_blocks": 16384, 00:04:22.978 "product_name": "Malloc disk", 00:04:22.978 "supported_io_types": { 00:04:22.978 "abort": true, 00:04:22.978 "compare": false, 00:04:22.978 "compare_and_write": false, 00:04:22.978 "copy": true, 00:04:22.978 "flush": true, 00:04:22.978 "get_zone_info": false, 00:04:22.978 "nvme_admin": false, 00:04:22.978 "nvme_io": false, 00:04:22.978 "nvme_io_md": false, 00:04:22.978 "nvme_iov_md": false, 00:04:22.978 "read": true, 00:04:22.978 "reset": true, 00:04:22.978 "seek_data": false, 00:04:22.978 "seek_hole": false, 00:04:22.978 "unmap": true, 00:04:22.978 "write": true, 00:04:22.978 "write_zeroes": true, 00:04:22.978 "zcopy": true, 00:04:22.978 "zone_append": false, 00:04:22.978 "zone_management": false 00:04:22.978 }, 00:04:22.978 "uuid": "05e39ffa-a989-485a-9a29-e1ccdf00a807", 00:04:22.978 "zoned": false 00:04:22.978 } 00:04:22.978 ]' 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.978 [2024-07-15 11:23:00.423841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:22.978 [2024-07-15 11:23:00.423894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.978 [2024-07-15 11:23:00.423916] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f57d70 00:04:22.978 [2024-07-15 11:23:00.423926] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.978 [2024-07-15 11:23:00.425325] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.978 [2024-07-15 11:23:00.425355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.978 Passthru0 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.978 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.237 { 00:04:23.237 "aliases": [ 00:04:23.237 "05e39ffa-a989-485a-9a29-e1ccdf00a807" 00:04:23.237 ], 00:04:23.237 "assigned_rate_limits": { 00:04:23.237 "r_mbytes_per_sec": 0, 00:04:23.237 "rw_ios_per_sec": 0, 00:04:23.237 "rw_mbytes_per_sec": 0, 00:04:23.237 "w_mbytes_per_sec": 0 00:04:23.237 }, 00:04:23.237 "block_size": 512, 00:04:23.237 "claim_type": "exclusive_write", 00:04:23.237 "claimed": true, 00:04:23.237 "driver_specific": {}, 00:04:23.237 "memory_domains": [ 00:04:23.237 { 00:04:23.237 "dma_device_id": "system", 00:04:23.237 "dma_device_type": 1 00:04:23.237 }, 00:04:23.237 { 00:04:23.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.237 "dma_device_type": 2 00:04:23.237 } 00:04:23.237 ], 00:04:23.237 "name": "Malloc3", 00:04:23.237 "num_blocks": 16384, 00:04:23.237 "product_name": "Malloc disk", 00:04:23.237 "supported_io_types": { 00:04:23.237 "abort": true, 00:04:23.237 "compare": false, 00:04:23.237 "compare_and_write": false, 00:04:23.237 "copy": true, 00:04:23.237 "flush": true, 00:04:23.237 "get_zone_info": false, 00:04:23.237 "nvme_admin": false, 00:04:23.237 "nvme_io": false, 00:04:23.237 "nvme_io_md": false, 00:04:23.237 "nvme_iov_md": false, 00:04:23.237 "read": true, 00:04:23.237 "reset": true, 00:04:23.237 "seek_data": false, 00:04:23.237 "seek_hole": false, 00:04:23.237 "unmap": true, 00:04:23.237 "write": true, 00:04:23.237 "write_zeroes": true, 00:04:23.237 "zcopy": true, 00:04:23.237 "zone_append": false, 00:04:23.237 "zone_management": false 00:04:23.237 }, 00:04:23.237 "uuid": "05e39ffa-a989-485a-9a29-e1ccdf00a807", 00:04:23.237 "zoned": false 00:04:23.237 }, 00:04:23.237 { 00:04:23.237 "aliases": [ 00:04:23.237 "49432ba1-3895-50f5-add4-98d07506d67b" 00:04:23.237 ], 00:04:23.237 "assigned_rate_limits": { 00:04:23.237 "r_mbytes_per_sec": 0, 00:04:23.237 "rw_ios_per_sec": 0, 00:04:23.237 "rw_mbytes_per_sec": 0, 00:04:23.237 "w_mbytes_per_sec": 0 00:04:23.237 }, 00:04:23.237 "block_size": 512, 00:04:23.237 "claimed": false, 00:04:23.237 "driver_specific": { 00:04:23.237 "passthru": { 00:04:23.237 "base_bdev_name": "Malloc3", 00:04:23.237 "name": "Passthru0" 00:04:23.237 } 00:04:23.237 }, 00:04:23.237 "memory_domains": [ 00:04:23.237 { 00:04:23.237 "dma_device_id": "system", 00:04:23.237 "dma_device_type": 1 00:04:23.237 }, 00:04:23.237 { 00:04:23.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.237 "dma_device_type": 2 00:04:23.237 } 00:04:23.237 ], 00:04:23.237 "name": "Passthru0", 00:04:23.237 "num_blocks": 16384, 00:04:23.237 "product_name": "passthru", 00:04:23.237 "supported_io_types": { 00:04:23.237 "abort": true, 00:04:23.237 "compare": false, 00:04:23.237 "compare_and_write": false, 00:04:23.237 "copy": true, 00:04:23.237 "flush": true, 00:04:23.237 
"get_zone_info": false, 00:04:23.237 "nvme_admin": false, 00:04:23.237 "nvme_io": false, 00:04:23.237 "nvme_io_md": false, 00:04:23.237 "nvme_iov_md": false, 00:04:23.237 "read": true, 00:04:23.237 "reset": true, 00:04:23.237 "seek_data": false, 00:04:23.237 "seek_hole": false, 00:04:23.237 "unmap": true, 00:04:23.237 "write": true, 00:04:23.237 "write_zeroes": true, 00:04:23.237 "zcopy": true, 00:04:23.237 "zone_append": false, 00:04:23.237 "zone_management": false 00:04:23.237 }, 00:04:23.237 "uuid": "49432ba1-3895-50f5-add4-98d07506d67b", 00:04:23.237 "zoned": false 00:04:23.237 } 00:04:23.237 ]' 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.237 00:04:23.237 real 0m0.329s 00:04:23.237 user 0m0.218s 00:04:23.237 sys 0m0.044s 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.237 11:23:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.237 ************************************ 00:04:23.237 END TEST rpc_daemon_integrity 00:04:23.237 ************************************ 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:23.237 11:23:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.237 11:23:00 rpc -- rpc/rpc.sh@84 -- # killprocess 60700 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@948 -- # '[' -z 60700 ']' 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@952 -- # kill -0 60700 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@953 -- # uname 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60700 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:23.237 killing process with pid 60700 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
60700' 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@967 -- # kill 60700 00:04:23.237 11:23:00 rpc -- common/autotest_common.sh@972 -- # wait 60700 00:04:23.507 00:04:23.507 real 0m3.005s 00:04:23.507 user 0m4.173s 00:04:23.507 sys 0m0.619s 00:04:23.507 11:23:00 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.507 11:23:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.507 ************************************ 00:04:23.507 END TEST rpc 00:04:23.507 ************************************ 00:04:23.507 11:23:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:23.507 11:23:00 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.507 11:23:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.507 11:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.507 11:23:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.507 ************************************ 00:04:23.507 START TEST skip_rpc 00:04:23.507 ************************************ 00:04:23.507 11:23:00 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.776 * Looking for test storage... 00:04:23.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.776 11:23:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.776 11:23:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.776 11:23:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.776 11:23:01 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.776 11:23:01 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.776 11:23:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.776 ************************************ 00:04:23.776 START TEST skip_rpc 00:04:23.776 ************************************ 00:04:23.776 11:23:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:23.776 11:23:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60961 00:04:23.776 11:23:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.776 11:23:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.776 11:23:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.776 [2024-07-15 11:23:01.125474] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
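Note: the skip_rpc test starting here launches the target with --no-rpc-server, so no /var/tmp/spdk.sock listener is ever created; the check that follows expects rpc.py spdk_get_version to fail with a connection error before the target is killed. A minimal sketch of that negative check, with the NOT and killprocess helpers from autotest_common.sh reduced to plain shell:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                            # the test sleeps instead of waiting for a socket that never appears
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server should not be listening" >&2
  fi
  kill $spdk_pid && wait $spdk_pid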
00:04:23.776 [2024-07-15 11:23:01.125631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60961 ] 00:04:24.034 [2024-07-15 11:23:01.264797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.034 [2024-07-15 11:23:01.338156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.301 2024/07/15 11:23:06 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60961 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60961 ']' 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60961 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60961 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60961' 00:04:29.301 killing process with pid 60961 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60961 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60961 00:04:29.301 00:04:29.301 real 0m5.294s 00:04:29.301 user 0m4.997s 00:04:29.301 sys 0m0.188s 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.301 11:23:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.301 ************************************ 00:04:29.301 END TEST skip_rpc 00:04:29.301 ************************************ 00:04:29.301 11:23:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:29.301 11:23:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.301 11:23:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.301 11:23:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.301 11:23:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.301 ************************************ 00:04:29.301 START TEST skip_rpc_with_json 00:04:29.301 ************************************ 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61048 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61048 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61048 ']' 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.301 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.301 [2024-07-15 11:23:06.449392] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
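Note: skip_rpc_with_json, which begins here, verifies the save_config/--json round trip in the run below: it creates a TCP transport over RPC, saves the full configuration to test/rpc/config.json, then restarts the target with --no-rpc-server --json and checks that the saved transport is re-created at startup. A rough sketch of that round trip, with the grep mirroring the check at the end of the sub-test ($spdk_pid is assumed to hold the pid of the target started above, and helper functions are condensed to plain shell):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
  $rpc nvmf_get_transports --trtype tcp              # fails: no transport exists yet
  $rpc nvmf_create_transport -t tcp                  # "*** TCP Transport Init ***"
  $rpc save_config > "$cfg"
  kill $spdk_pid && wait $spdk_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" > log.txt 2>&1 &
  sleep 5
  grep -q 'TCP Transport Init' log.txt               # the transport is recreated from the JSON config at startup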
00:04:29.301 [2024-07-15 11:23:06.449487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61048 ] 00:04:29.301 [2024-07-15 11:23:06.584337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.301 [2024-07-15 11:23:06.643360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.559 [2024-07-15 11:23:06.817370] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:29.559 2024/07/15 11:23:06 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:29.559 request: 00:04:29.559 { 00:04:29.559 "method": "nvmf_get_transports", 00:04:29.559 "params": { 00:04:29.559 "trtype": "tcp" 00:04:29.559 } 00:04:29.559 } 00:04:29.559 Got JSON-RPC error response 00:04:29.559 GoRPCClient: error on JSON-RPC call 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.559 [2024-07-15 11:23:06.829449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:29.559 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:29.559 { 00:04:29.559 "subsystems": [ 00:04:29.559 { 00:04:29.559 "subsystem": "keyring", 00:04:29.559 "config": [] 00:04:29.559 }, 00:04:29.559 { 00:04:29.559 "subsystem": "iobuf", 00:04:29.559 "config": [ 00:04:29.559 { 00:04:29.559 "method": "iobuf_set_options", 00:04:29.559 "params": { 00:04:29.559 "large_bufsize": 135168, 00:04:29.559 "large_pool_count": 1024, 00:04:29.559 "small_bufsize": 8192, 00:04:29.559 "small_pool_count": 8192 00:04:29.559 } 00:04:29.559 } 00:04:29.559 ] 00:04:29.559 }, 00:04:29.559 { 00:04:29.559 "subsystem": "sock", 00:04:29.559 "config": [ 00:04:29.559 { 00:04:29.559 "method": "sock_set_default_impl", 00:04:29.559 "params": { 00:04:29.560 "impl_name": "posix" 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": 
"sock_impl_set_options", 00:04:29.560 "params": { 00:04:29.560 "enable_ktls": false, 00:04:29.560 "enable_placement_id": 0, 00:04:29.560 "enable_quickack": false, 00:04:29.560 "enable_recv_pipe": true, 00:04:29.560 "enable_zerocopy_send_client": false, 00:04:29.560 "enable_zerocopy_send_server": true, 00:04:29.560 "impl_name": "ssl", 00:04:29.560 "recv_buf_size": 4096, 00:04:29.560 "send_buf_size": 4096, 00:04:29.560 "tls_version": 0, 00:04:29.560 "zerocopy_threshold": 0 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "sock_impl_set_options", 00:04:29.560 "params": { 00:04:29.560 "enable_ktls": false, 00:04:29.560 "enable_placement_id": 0, 00:04:29.560 "enable_quickack": false, 00:04:29.560 "enable_recv_pipe": true, 00:04:29.560 "enable_zerocopy_send_client": false, 00:04:29.560 "enable_zerocopy_send_server": true, 00:04:29.560 "impl_name": "posix", 00:04:29.560 "recv_buf_size": 2097152, 00:04:29.560 "send_buf_size": 2097152, 00:04:29.560 "tls_version": 0, 00:04:29.560 "zerocopy_threshold": 0 00:04:29.560 } 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "vmd", 00:04:29.560 "config": [] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "accel", 00:04:29.560 "config": [ 00:04:29.560 { 00:04:29.560 "method": "accel_set_options", 00:04:29.560 "params": { 00:04:29.560 "buf_count": 2048, 00:04:29.560 "large_cache_size": 16, 00:04:29.560 "sequence_count": 2048, 00:04:29.560 "small_cache_size": 128, 00:04:29.560 "task_count": 2048 00:04:29.560 } 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "bdev", 00:04:29.560 "config": [ 00:04:29.560 { 00:04:29.560 "method": "bdev_set_options", 00:04:29.560 "params": { 00:04:29.560 "bdev_auto_examine": true, 00:04:29.560 "bdev_io_cache_size": 256, 00:04:29.560 "bdev_io_pool_size": 65535, 00:04:29.560 "iobuf_large_cache_size": 16, 00:04:29.560 "iobuf_small_cache_size": 128 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "bdev_raid_set_options", 00:04:29.560 "params": { 00:04:29.560 "process_window_size_kb": 1024 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "bdev_iscsi_set_options", 00:04:29.560 "params": { 00:04:29.560 "timeout_sec": 30 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "bdev_nvme_set_options", 00:04:29.560 "params": { 00:04:29.560 "action_on_timeout": "none", 00:04:29.560 "allow_accel_sequence": false, 00:04:29.560 "arbitration_burst": 0, 00:04:29.560 "bdev_retry_count": 3, 00:04:29.560 "ctrlr_loss_timeout_sec": 0, 00:04:29.560 "delay_cmd_submit": true, 00:04:29.560 "dhchap_dhgroups": [ 00:04:29.560 "null", 00:04:29.560 "ffdhe2048", 00:04:29.560 "ffdhe3072", 00:04:29.560 "ffdhe4096", 00:04:29.560 "ffdhe6144", 00:04:29.560 "ffdhe8192" 00:04:29.560 ], 00:04:29.560 "dhchap_digests": [ 00:04:29.560 "sha256", 00:04:29.560 "sha384", 00:04:29.560 "sha512" 00:04:29.560 ], 00:04:29.560 "disable_auto_failback": false, 00:04:29.560 "fast_io_fail_timeout_sec": 0, 00:04:29.560 "generate_uuids": false, 00:04:29.560 "high_priority_weight": 0, 00:04:29.560 "io_path_stat": false, 00:04:29.560 "io_queue_requests": 0, 00:04:29.560 "keep_alive_timeout_ms": 10000, 00:04:29.560 "low_priority_weight": 0, 00:04:29.560 "medium_priority_weight": 0, 00:04:29.560 "nvme_adminq_poll_period_us": 10000, 00:04:29.560 "nvme_error_stat": false, 00:04:29.560 "nvme_ioq_poll_period_us": 0, 00:04:29.560 "rdma_cm_event_timeout_ms": 0, 00:04:29.560 "rdma_max_cq_size": 0, 00:04:29.560 "rdma_srq_size": 0, 00:04:29.560 
"reconnect_delay_sec": 0, 00:04:29.560 "timeout_admin_us": 0, 00:04:29.560 "timeout_us": 0, 00:04:29.560 "transport_ack_timeout": 0, 00:04:29.560 "transport_retry_count": 4, 00:04:29.560 "transport_tos": 0 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "bdev_nvme_set_hotplug", 00:04:29.560 "params": { 00:04:29.560 "enable": false, 00:04:29.560 "period_us": 100000 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "bdev_wait_for_examine" 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "scsi", 00:04:29.560 "config": null 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "scheduler", 00:04:29.560 "config": [ 00:04:29.560 { 00:04:29.560 "method": "framework_set_scheduler", 00:04:29.560 "params": { 00:04:29.560 "name": "static" 00:04:29.560 } 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "vhost_scsi", 00:04:29.560 "config": [] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "vhost_blk", 00:04:29.560 "config": [] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "ublk", 00:04:29.560 "config": [] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "nbd", 00:04:29.560 "config": [] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "nvmf", 00:04:29.560 "config": [ 00:04:29.560 { 00:04:29.560 "method": "nvmf_set_config", 00:04:29.560 "params": { 00:04:29.560 "admin_cmd_passthru": { 00:04:29.560 "identify_ctrlr": false 00:04:29.560 }, 00:04:29.560 "discovery_filter": "match_any" 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "nvmf_set_max_subsystems", 00:04:29.560 "params": { 00:04:29.560 "max_subsystems": 1024 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "nvmf_set_crdt", 00:04:29.560 "params": { 00:04:29.560 "crdt1": 0, 00:04:29.560 "crdt2": 0, 00:04:29.560 "crdt3": 0 00:04:29.560 } 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "method": "nvmf_create_transport", 00:04:29.560 "params": { 00:04:29.560 "abort_timeout_sec": 1, 00:04:29.560 "ack_timeout": 0, 00:04:29.560 "buf_cache_size": 4294967295, 00:04:29.560 "c2h_success": true, 00:04:29.560 "data_wr_pool_size": 0, 00:04:29.560 "dif_insert_or_strip": false, 00:04:29.560 "in_capsule_data_size": 4096, 00:04:29.560 "io_unit_size": 131072, 00:04:29.560 "max_aq_depth": 128, 00:04:29.560 "max_io_qpairs_per_ctrlr": 127, 00:04:29.560 "max_io_size": 131072, 00:04:29.560 "max_queue_depth": 128, 00:04:29.560 "num_shared_buffers": 511, 00:04:29.560 "sock_priority": 0, 00:04:29.560 "trtype": "TCP", 00:04:29.560 "zcopy": false 00:04:29.560 } 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 }, 00:04:29.560 { 00:04:29.560 "subsystem": "iscsi", 00:04:29.560 "config": [ 00:04:29.560 { 00:04:29.560 "method": "iscsi_set_options", 00:04:29.560 "params": { 00:04:29.560 "allow_duplicated_isid": false, 00:04:29.560 "chap_group": 0, 00:04:29.560 "data_out_pool_size": 2048, 00:04:29.560 "default_time2retain": 20, 00:04:29.560 "default_time2wait": 2, 00:04:29.560 "disable_chap": false, 00:04:29.560 "error_recovery_level": 0, 00:04:29.560 "first_burst_length": 8192, 00:04:29.560 "immediate_data": true, 00:04:29.560 "immediate_data_pool_size": 16384, 00:04:29.560 "max_connections_per_session": 2, 00:04:29.560 "max_large_datain_per_connection": 64, 00:04:29.560 "max_queue_depth": 64, 00:04:29.560 "max_r2t_per_connection": 4, 00:04:29.560 "max_sessions": 128, 00:04:29.560 "mutual_chap": false, 00:04:29.560 "node_base": "iqn.2016-06.io.spdk", 00:04:29.560 "nop_in_interval": 30, 00:04:29.560 
"nop_timeout": 60, 00:04:29.560 "pdu_pool_size": 36864, 00:04:29.560 "require_chap": false 00:04:29.560 } 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 } 00:04:29.560 ] 00:04:29.560 } 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61048 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61048 ']' 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61048 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:29.560 11:23:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61048 00:04:29.560 11:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:29.560 11:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:29.560 killing process with pid 61048 00:04:29.560 11:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61048' 00:04:29.560 11:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61048 00:04:29.560 11:23:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61048 00:04:29.819 11:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61074 00:04:29.819 11:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:29.819 11:23:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61074 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61074 ']' 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61074 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61074 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.117 killing process with pid 61074 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61074' 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61074 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61074 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.117 00:04:35.117 real 0m6.172s 00:04:35.117 user 0m5.908s 00:04:35.117 sys 0m0.400s 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.117 11:23:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.117 ************************************ 00:04:35.117 END TEST skip_rpc_with_json 00:04:35.117 ************************************ 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.375 11:23:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.375 ************************************ 00:04:35.375 START TEST skip_rpc_with_delay 00:04:35.375 ************************************ 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.375 [2024-07-15 11:23:12.678924] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
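Note: the error above is the expected outcome of the skip_rpc_with_delay case, which deliberately combines mutually exclusive flags. A minimal sketch of the same invocation, with the binary path as laid out in this CI workspace (adjust for a local checkout); the error text is the line spdk_tgt prints in the trace above:

    # Expected to fail: --wait-for-rpc needs the RPC server, which --no-rpc-server disables
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.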
00:04:35.375 [2024-07-15 11:23:12.679079] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:35.375 00:04:35.375 real 0m0.078s 00:04:35.375 user 0m0.049s 00:04:35.375 sys 0m0.028s 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.375 ************************************ 00:04:35.375 END TEST skip_rpc_with_delay 00:04:35.375 11:23:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:35.375 ************************************ 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:35.375 11:23:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:35.375 11:23:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:35.375 11:23:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.375 11:23:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.375 ************************************ 00:04:35.375 START TEST exit_on_failed_rpc_init 00:04:35.375 ************************************ 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61178 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61178 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61178 ']' 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.375 11:23:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.375 [2024-07-15 11:23:12.807466] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:35.376 [2024-07-15 11:23:12.807564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61178 ] 00:04:35.689 [2024-07-15 11:23:12.943634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.689 [2024-07-15 11:23:13.002001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:36.624 11:23:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:36.624 [2024-07-15 11:23:13.860097] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:36.624 [2024-07-15 11:23:13.860213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61208 ] 00:04:36.624 [2024-07-15 11:23:13.999376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.624 [2024-07-15 11:23:14.069192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.624 [2024-07-15 11:23:14.069291] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:36.624 [2024-07-15 11:23:14.069309] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:36.624 [2024-07-15 11:23:14.069320] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61178 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61178 ']' 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61178 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61178 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.882 killing process with pid 61178 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61178' 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61178 00:04:36.882 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61178 00:04:37.140 00:04:37.140 real 0m1.693s 00:04:37.140 user 0m2.108s 00:04:37.140 sys 0m0.295s 00:04:37.140 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.140 11:23:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.140 ************************************ 00:04:37.140 END TEST exit_on_failed_rpc_init 00:04:37.140 ************************************ 00:04:37.140 11:23:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:37.140 11:23:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.140 00:04:37.140 real 0m13.526s 00:04:37.140 user 0m13.164s 00:04:37.140 sys 0m1.084s 00:04:37.140 11:23:14 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.140 11:23:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.140 ************************************ 00:04:37.140 END TEST skip_rpc 00:04:37.140 ************************************ 00:04:37.140 11:23:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.140 11:23:14 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.140 11:23:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.140 
11:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.140 11:23:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.140 ************************************ 00:04:37.140 START TEST rpc_client 00:04:37.140 ************************************ 00:04:37.140 11:23:14 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.400 * Looking for test storage... 00:04:37.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:37.400 11:23:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:37.400 OK 00:04:37.400 11:23:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:37.400 00:04:37.400 real 0m0.103s 00:04:37.400 user 0m0.052s 00:04:37.400 sys 0m0.056s 00:04:37.400 11:23:14 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.400 11:23:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:37.400 ************************************ 00:04:37.400 END TEST rpc_client 00:04:37.400 ************************************ 00:04:37.400 11:23:14 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.400 11:23:14 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.400 11:23:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.400 11:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.400 11:23:14 -- common/autotest_common.sh@10 -- # set +x 00:04:37.400 ************************************ 00:04:37.400 START TEST json_config 00:04:37.400 ************************************ 00:04:37.400 11:23:14 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:37.400 11:23:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.400 11:23:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.400 11:23:14 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:37.400 11:23:14 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.400 11:23:14 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.400 11:23:14 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.400 11:23:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.400 11:23:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.401 11:23:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.401 11:23:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:37.401 11:23:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@47 -- # : 0 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:37.401 11:23:14 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.401 INFO: JSON configuration test init 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.401 11:23:14 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:37.401 11:23:14 json_config -- json_config/common.sh@9 -- # local app=target 00:04:37.401 11:23:14 json_config -- json_config/common.sh@10 -- # shift 00:04:37.401 11:23:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.401 11:23:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.401 11:23:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.401 11:23:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.401 11:23:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.401 11:23:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61326 00:04:37.401 11:23:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.401 Waiting for target to run... 
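Note: the json_config suite keeps its target on a dedicated RPC socket rather than the default /var/tmp/spdk.sock. A minimal sketch of that pattern using the flags and socket path shown above; the framework_start_init and save_config calls here are a simplified stand-in for the suite's own load_config step:

    # Start the target with RPC on a private socket, deferring subsystem init
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # Drive it over the same socket once it is listening
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config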
00:04:37.401 11:23:14 json_config -- json_config/common.sh@25 -- # waitforlisten 61326 /var/tmp/spdk_tgt.sock 00:04:37.401 11:23:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 61326 ']' 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.401 11:23:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.401 [2024-07-15 11:23:14.843675] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:37.401 [2024-07-15 11:23:14.843776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61326 ] 00:04:37.966 [2024-07-15 11:23:15.147676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.966 [2024-07-15 11:23:15.210074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.533 11:23:15 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.533 11:23:15 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:38.533 11:23:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.533 00:04:38.533 11:23:15 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:38.533 11:23:15 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:38.533 11:23:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.533 11:23:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.533 11:23:15 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:38.533 11:23:15 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:38.533 11:23:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.533 11:23:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.533 11:23:15 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:38.533 11:23:15 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:38.533 11:23:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:39.100 11:23:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.100 11:23:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:39.100 11:23:16 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:39.101 11:23:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:39.360 11:23:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.360 11:23:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:39.360 11:23:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.360 11:23:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:39.360 11:23:16 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.360 11:23:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.619 MallocForNvmf0 00:04:39.619 11:23:17 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.619 11:23:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.877 MallocForNvmf1 00:04:39.877 11:23:17 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:39.877 11:23:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:40.136 [2024-07-15 11:23:17.497116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.136 11:23:17 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:40.136 11:23:17 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:40.394 11:23:17 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.394 11:23:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.652 11:23:18 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.652 11:23:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.911 11:23:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:40.911 11:23:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:41.170 [2024-07-15 11:23:18.605679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:41.170 11:23:18 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:41.170 11:23:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.170 11:23:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.436 11:23:18 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:41.436 11:23:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.436 11:23:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.436 11:23:18 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:41.436 11:23:18 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.436 11:23:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.695 MallocBdevForConfigChangeCheck 00:04:41.695 11:23:18 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:41.695 11:23:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.695 11:23:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.695 11:23:18 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:41.695 11:23:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.954 INFO: shutting down applications... 00:04:41.954 11:23:19 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
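Note: the create_nvmf_subsystem_config step above boils down to the following RPCs, condensed from the calls shown in the trace (the RPC shell variable is only shorthand for the rpc.py invocation used throughout this run):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420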
00:04:41.954 11:23:19 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:41.954 11:23:19 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:41.954 11:23:19 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:41.954 11:23:19 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:42.523 Calling clear_iscsi_subsystem 00:04:42.523 Calling clear_nvmf_subsystem 00:04:42.523 Calling clear_nbd_subsystem 00:04:42.523 Calling clear_ublk_subsystem 00:04:42.523 Calling clear_vhost_blk_subsystem 00:04:42.523 Calling clear_vhost_scsi_subsystem 00:04:42.523 Calling clear_bdev_subsystem 00:04:42.523 11:23:19 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:42.523 11:23:19 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:42.523 11:23:19 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:42.523 11:23:19 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.523 11:23:19 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:42.523 11:23:19 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:42.781 11:23:20 json_config -- json_config/json_config.sh@345 -- # break 00:04:42.781 11:23:20 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:42.781 11:23:20 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:42.781 11:23:20 json_config -- json_config/common.sh@31 -- # local app=target 00:04:42.781 11:23:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.781 11:23:20 json_config -- json_config/common.sh@35 -- # [[ -n 61326 ]] 00:04:42.781 11:23:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61326 00:04:42.781 11:23:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.781 11:23:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.781 11:23:20 json_config -- json_config/common.sh@41 -- # kill -0 61326 00:04:42.781 11:23:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.347 11:23:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.347 11:23:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.347 11:23:20 json_config -- json_config/common.sh@41 -- # kill -0 61326 00:04:43.347 11:23:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.347 11:23:20 json_config -- json_config/common.sh@43 -- # break 00:04:43.347 11:23:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.347 SPDK target shutdown done 00:04:43.347 11:23:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.347 INFO: relaunching applications... 00:04:43.347 11:23:20 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
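Note: the relaunch announced above follows a save-then-replay pattern: persist the runtime configuration over RPC, stop the target, and start a fresh one directly from the JSON file, skipping the RPC bring-up. Sketched with the paths used in this run; the shutdown of the old instance is implied between the two commands:

    # Persist the live configuration of the running target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    # After stopping the old target, start a new one straight from the saved JSON
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json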
00:04:43.347 11:23:20 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.347 11:23:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:43.347 11:23:20 json_config -- json_config/common.sh@10 -- # shift 00:04:43.347 11:23:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.347 11:23:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.347 11:23:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.347 11:23:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.347 11:23:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.347 11:23:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61606 00:04:43.347 11:23:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.347 Waiting for target to run... 00:04:43.347 11:23:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.347 11:23:20 json_config -- json_config/common.sh@25 -- # waitforlisten 61606 /var/tmp/spdk_tgt.sock 00:04:43.347 11:23:20 json_config -- common/autotest_common.sh@829 -- # '[' -z 61606 ']' 00:04:43.347 11:23:20 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.347 11:23:20 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.347 11:23:20 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.347 11:23:20 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.347 11:23:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.347 [2024-07-15 11:23:20.706515] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:43.347 [2024-07-15 11:23:20.706641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61606 ] 00:04:43.604 [2024-07-15 11:23:21.011849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.604 [2024-07-15 11:23:21.057305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.170 [2024-07-15 11:23:21.365386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.170 [2024-07-15 11:23:21.397442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.453 11:23:21 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.453 11:23:21 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:44.453 00:04:44.453 11:23:21 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.453 11:23:21 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:44.453 11:23:21 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:44.453 INFO: Checking if target configuration is the same... 
00:04:44.453 11:23:21 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.453 11:23:21 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:44.453 11:23:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.453 + '[' 2 -ne 2 ']' 00:04:44.453 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:44.453 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:44.453 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:44.453 +++ basename /dev/fd/62 00:04:44.453 ++ mktemp /tmp/62.XXX 00:04:44.453 + tmp_file_1=/tmp/62.zWV 00:04:44.453 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.453 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.453 + tmp_file_2=/tmp/spdk_tgt_config.json.Gyu 00:04:44.453 + ret=0 00:04:44.453 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.735 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:44.735 + diff -u /tmp/62.zWV /tmp/spdk_tgt_config.json.Gyu 00:04:44.735 INFO: JSON config files are the same 00:04:44.735 + echo 'INFO: JSON config files are the same' 00:04:44.735 + rm /tmp/62.zWV /tmp/spdk_tgt_config.json.Gyu 00:04:44.735 + exit 0 00:04:44.735 11:23:22 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:44.735 INFO: changing configuration and checking if this can be detected... 00:04:44.735 11:23:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:44.735 11:23:22 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.735 11:23:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.993 11:23:22 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.993 11:23:22 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:44.993 11:23:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.993 + '[' 2 -ne 2 ']' 00:04:44.993 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:44.993 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:44.993 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:44.993 +++ basename /dev/fd/62 00:04:44.993 ++ mktemp /tmp/62.XXX 00:04:44.993 + tmp_file_1=/tmp/62.JNU 00:04:44.993 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:44.993 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.993 + tmp_file_2=/tmp/spdk_tgt_config.json.1Zz 00:04:44.993 + ret=0 00:04:44.993 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:45.560 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:45.560 + diff -u /tmp/62.JNU /tmp/spdk_tgt_config.json.1Zz 00:04:45.560 + ret=1 00:04:45.560 + echo '=== Start of file: /tmp/62.JNU ===' 00:04:45.560 + cat /tmp/62.JNU 00:04:45.560 + echo '=== End of file: /tmp/62.JNU ===' 00:04:45.560 + echo '' 00:04:45.560 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1Zz ===' 00:04:45.560 + cat /tmp/spdk_tgt_config.json.1Zz 00:04:45.560 + echo '=== End of file: /tmp/spdk_tgt_config.json.1Zz ===' 00:04:45.560 + echo '' 00:04:45.560 + rm /tmp/62.JNU /tmp/spdk_tgt_config.json.1Zz 00:04:45.560 + exit 1 00:04:45.560 INFO: configuration change detected. 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@317 -- # [[ -n 61606 ]] 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.560 11:23:22 json_config -- json_config/json_config.sh@323 -- # killprocess 61606 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@948 -- # '[' -z 61606 ']' 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@952 -- # kill -0 61606 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@953 -- # uname 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61606 00:04:45.560 
11:23:22 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.560 killing process with pid 61606 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61606' 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@967 -- # kill 61606 00:04:45.560 11:23:22 json_config -- common/autotest_common.sh@972 -- # wait 61606 00:04:45.818 11:23:23 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:45.818 11:23:23 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:45.818 11:23:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.818 11:23:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 11:23:23 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:45.818 INFO: Success 00:04:45.818 11:23:23 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:45.818 00:04:45.818 real 0m8.476s 00:04:45.818 user 0m12.507s 00:04:45.818 sys 0m1.540s 00:04:45.818 11:23:23 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.818 11:23:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 ************************************ 00:04:45.818 END TEST json_config 00:04:45.818 ************************************ 00:04:45.818 11:23:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:45.818 11:23:23 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.818 11:23:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.818 11:23:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.818 11:23:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.818 ************************************ 00:04:45.818 START TEST json_config_extra_key 00:04:45.818 ************************************ 00:04:45.818 11:23:23 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:45.818 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.818 11:23:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.818 11:23:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.818 11:23:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.818 11:23:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.818 11:23:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.819 11:23:23 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.819 11:23:23 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.819 11:23:23 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.819 11:23:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.819 11:23:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.819 11:23:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.819 11:23:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.819 11:23:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:45.819 11:23:23 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:46.077 11:23:23 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:46.077 11:23:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.077 11:23:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.077 11:23:23 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.077 11:23:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:46.077 11:23:23 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:46.077 11:23:23 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.077 INFO: launching applications... 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:46.077 11:23:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61782 00:04:46.077 Waiting for target to run... 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61782 /var/tmp/spdk_tgt.sock 00:04:46.077 11:23:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.077 11:23:23 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61782 ']' 00:04:46.077 11:23:23 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.077 11:23:23 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.077 11:23:23 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.077 11:23:23 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.077 11:23:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.077 [2024-07-15 11:23:23.356467] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:46.077 [2024-07-15 11:23:23.356964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61782 ] 00:04:46.335 [2024-07-15 11:23:23.634930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.335 [2024-07-15 11:23:23.681030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.901 11:23:24 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.901 11:23:24 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:46.901 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:46.901 INFO: shutting down applications... 00:04:46.901 11:23:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
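The trace above is the start half of the test: json_config_test_start_app launches spdk_tgt with the extra_key.json config and then blocks in waitforlisten until the target answers on /var/tmp/spdk_tgt.sock. A minimal stand-alone sketch of that start-and-wait step, with the spdk_tgt invocation copied from the trace and the rpc_get_methods probe plus the retry count being simplifying assumptions rather than the helper's exact logic:

  # Launch the target with the JSON config used by this test (arguments as in the trace).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!
  # Poll the RPC socket until the app responds (assumed probe; the real helper is waitforlisten).
  for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done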
00:04:46.901 11:23:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61782 ]] 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61782 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61782 00:04:46.901 11:23:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61782 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.467 SPDK target shutdown done 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.467 11:23:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.467 Success 00:04:47.467 11:23:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.467 00:04:47.467 real 0m1.625s 00:04:47.467 user 0m1.517s 00:04:47.467 sys 0m0.297s 00:04:47.467 11:23:24 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.467 ************************************ 00:04:47.467 END TEST json_config_extra_key 00:04:47.467 11:23:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.467 ************************************ 00:04:47.467 11:23:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.467 11:23:24 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.467 11:23:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.467 11:23:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.467 11:23:24 -- common/autotest_common.sh@10 -- # set +x 00:04:47.467 ************************************ 00:04:47.467 START TEST alias_rpc 00:04:47.467 ************************************ 00:04:47.467 11:23:24 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.726 * Looking for test storage... 
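The shutdown half traced above sends SIGINT to the target and then polls the pid with kill -0 in half-second steps, giving up after 30 tries. A condensed sketch of the same loop (the logic mirrors the trace; the hard-kill fallback on the last line is an assumption, not something the trace shows):

  kill -SIGINT "$app_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break   # process exited: clean shutdown
      sleep 0.5
  done
  kill -0 "$app_pid" 2>/dev/null && kill -9 "$app_pid"   # assumed fallback for a hung target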
00:04:47.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:47.726 11:23:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.726 11:23:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61853 00:04:47.726 11:23:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:47.726 11:23:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61853 00:04:47.726 11:23:24 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61853 ']' 00:04:47.726 11:23:24 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.726 11:23:24 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.726 11:23:24 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.726 11:23:24 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.726 11:23:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.726 [2024-07-15 11:23:25.040190] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:47.726 [2024-07-15 11:23:25.040295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61853 ] 00:04:47.726 [2024-07-15 11:23:25.178247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.983 [2024-07-15 11:23:25.247595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.549 11:23:26 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.549 11:23:26 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:48.549 11:23:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.116 11:23:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61853 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61853 ']' 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61853 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61853 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.116 killing process with pid 61853 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61853' 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@967 -- # kill 61853 00:04:49.116 11:23:26 alias_rpc -- common/autotest_common.sh@972 -- # wait 61853 00:04:49.375 00:04:49.375 real 0m1.697s 00:04:49.375 user 0m2.085s 00:04:49.375 sys 0m0.330s 00:04:49.375 11:23:26 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.375 11:23:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.375 ************************************ 00:04:49.375 END TEST alias_rpc 00:04:49.375 ************************************ 00:04:49.375 
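The alias_rpc run that ends above starts a bare spdk_tgt and drives it with rpc.py load_config -i, which replays a saved JSON configuration over the RPC socket (reading the -i switch as "include RPC aliases" is an inference from the test's purpose, not something the trace states). A minimal reproduction, with the config source left as a placeholder:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  # ... wait for /var/tmp/spdk.sock as in the start-up sketch earlier ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < saved_config.json   # placeholder input file
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"   # simplified stand-in for the killprocess helper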
11:23:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.375 11:23:26 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:04:49.375 11:23:26 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.375 11:23:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.375 11:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.375 11:23:26 -- common/autotest_common.sh@10 -- # set +x 00:04:49.375 ************************************ 00:04:49.375 START TEST dpdk_mem_utility 00:04:49.375 ************************************ 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.375 * Looking for test storage... 00:04:49.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:49.375 11:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.375 11:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61945 00:04:49.375 11:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.375 11:23:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61945 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61945 ']' 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.375 11:23:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.375 [2024-07-15 11:23:26.824934] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:49.375 [2024-07-15 11:23:26.825098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61945 ] 00:04:49.634 [2024-07-15 11:23:26.972078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.634 [2024-07-15 11:23:27.041988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.570 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.570 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:50.570 11:23:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.570 11:23:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.570 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.570 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.570 { 00:04:50.570 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.570 } 00:04:50.570 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.570 11:23:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.570 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:50.570 1 heaps totaling size 814.000000 MiB 00:04:50.570 size: 814.000000 MiB heap id: 0 00:04:50.570 end heaps---------- 00:04:50.570 8 mempools totaling size 598.116089 MiB 00:04:50.570 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.570 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.570 size: 84.521057 MiB name: bdev_io_61945 00:04:50.570 size: 51.011292 MiB name: evtpool_61945 00:04:50.570 size: 50.003479 MiB name: msgpool_61945 00:04:50.570 size: 21.763794 MiB name: PDU_Pool 00:04:50.570 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.570 size: 0.026123 MiB name: Session_Pool 00:04:50.570 end mempools------- 00:04:50.570 6 memzones totaling size 4.142822 MiB 00:04:50.570 size: 1.000366 MiB name: RG_ring_0_61945 00:04:50.570 size: 1.000366 MiB name: RG_ring_1_61945 00:04:50.570 size: 1.000366 MiB name: RG_ring_4_61945 00:04:50.570 size: 1.000366 MiB name: RG_ring_5_61945 00:04:50.570 size: 0.125366 MiB name: RG_ring_2_61945 00:04:50.570 size: 0.015991 MiB name: RG_ring_3_61945 00:04:50.570 end memzones------- 00:04:50.570 11:23:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.570 heap id: 0 total size: 814.000000 MiB number of busy elements: 227 number of free elements: 15 00:04:50.570 list of free elements. 
size: 12.485291 MiB 00:04:50.570 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:50.570 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:50.570 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:50.570 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:50.570 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:50.570 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:50.570 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:50.570 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:50.570 element at address: 0x200000200000 with size: 0.836853 MiB 00:04:50.570 element at address: 0x20001aa00000 with size: 0.571533 MiB 00:04:50.570 element at address: 0x20000b200000 with size: 0.489441 MiB 00:04:50.570 element at address: 0x200000800000 with size: 0.486877 MiB 00:04:50.570 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:50.570 element at address: 0x200027e00000 with size: 0.398315 MiB 00:04:50.570 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:50.570 list of standard malloc elements. size: 199.252136 MiB 00:04:50.570 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:50.570 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:50.570 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:50.570 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:50.570 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:50.570 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:50.570 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:50.570 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:50.570 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:50.570 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:04:50.570 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:50.570 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:50.571 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa940c0 
with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:50.571 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e66040 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6dc80 with size: 0.000183 MiB 
00:04:50.571 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:50.571 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:50.572 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:50.572 list of memzone associated elements. 
size: 602.262573 MiB 00:04:50.572 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:50.572 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.572 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:50.572 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.572 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:50.572 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61945_0 00:04:50.572 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:50.572 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61945_0 00:04:50.572 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:50.572 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61945_0 00:04:50.572 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:50.572 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.572 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:50.572 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.572 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:50.572 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61945 00:04:50.572 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:50.572 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61945 00:04:50.572 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:50.572 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61945 00:04:50.572 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:50.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.572 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:50.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.572 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:50.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.572 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:50.572 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.572 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:50.572 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61945 00:04:50.572 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:50.572 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61945 00:04:50.572 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:50.572 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61945 00:04:50.572 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:50.572 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61945 00:04:50.572 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:50.572 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61945 00:04:50.572 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:50.572 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.572 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:50.572 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.572 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:50.572 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.572 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:50.572 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61945 00:04:50.572 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:50.572 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.572 element at address: 0x200027e66100 with size: 0.023743 MiB 00:04:50.572 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.572 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:50.572 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61945 00:04:50.572 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:04:50.572 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.572 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:50.572 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61945 00:04:50.572 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:50.572 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61945 00:04:50.572 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:04:50.572 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.572 11:23:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.572 11:23:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61945 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61945 ']' 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61945 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61945 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:50.572 killing process with pid 61945 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61945' 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61945 00:04:50.572 11:23:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61945 00:04:50.830 00:04:50.830 real 0m1.574s 00:04:50.830 user 0m1.828s 00:04:50.830 sys 0m0.352s 00:04:50.830 11:23:28 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.830 ************************************ 00:04:50.830 11:23:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.830 END TEST dpdk_mem_utility 00:04:50.830 ************************************ 00:04:50.830 11:23:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.831 11:23:28 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:50.831 11:23:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.831 11:23:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.831 11:23:28 -- common/autotest_common.sh@10 -- # set +x 00:04:50.831 ************************************ 00:04:50.831 START TEST event 00:04:50.831 ************************************ 00:04:50.831 11:23:28 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:51.089 * Looking for test storage... 
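The long memory report above comes from asking the running target to dump its DPDK allocator state and then summarizing the dump file; both steps appear in the trace (there via the rpc_cmd wrapper), and the dump path is the one returned by the RPC:

  # Ask spdk_tgt to write its DPDK memory stats; the reply names the dump file.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # -> { "filename": "/tmp/spdk_mem_dump.txt" }
  # Summarize the dump: heap/mempool totals first, then per-element detail for heap 0.
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0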
00:04:51.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:51.089 11:23:28 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:51.089 11:23:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:51.089 11:23:28 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.089 11:23:28 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:51.089 11:23:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.089 11:23:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.089 ************************************ 00:04:51.089 START TEST event_perf 00:04:51.089 ************************************ 00:04:51.089 11:23:28 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.089 Running I/O for 1 seconds...[2024-07-15 11:23:28.376834] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:04:51.089 [2024-07-15 11:23:28.376934] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62035 ] 00:04:51.089 [2024-07-15 11:23:28.515981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.348 [2024-07-15 11:23:28.578234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.348 [2024-07-15 11:23:28.578370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.348 [2024-07-15 11:23:28.578477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.348 [2024-07-15 11:23:28.578476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.314 Running I/O for 1 seconds... 00:04:52.314 lcore 0: 193153 00:04:52.314 lcore 1: 193153 00:04:52.314 lcore 2: 193153 00:04:52.314 lcore 3: 193152 00:04:52.314 done. 00:04:52.314 00:04:52.314 real 0m1.294s 00:04:52.314 user 0m4.114s 00:04:52.314 sys 0m0.056s 00:04:52.314 11:23:29 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.314 11:23:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.314 ************************************ 00:04:52.314 END TEST event_perf 00:04:52.314 ************************************ 00:04:52.314 11:23:29 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.315 11:23:29 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:52.315 11:23:29 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:52.315 11:23:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.315 11:23:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.315 ************************************ 00:04:52.315 START TEST event_reactor 00:04:52.315 ************************************ 00:04:52.315 11:23:29 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:52.315 [2024-07-15 11:23:29.719038] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
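event_perf, which finishes just above, is the standalone event-framework benchmark: it starts one reactor per core in the -m mask, submits events for -t seconds, and prints a per-lcore counter at the end (the "lcore N: ..." lines). Re-running it by hand uses the same binary and flags as the trace; the second invocation below is only an illustration of varying the mask and duration:

  # as in the log: 4 reactors (cores 0-3), 1-second run
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # illustrative variation: 8 cores for 5 seconds
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xFF -t 5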
00:04:52.315 [2024-07-15 11:23:29.719129] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62073 ] 00:04:52.572 [2024-07-15 11:23:29.854639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.572 [2024-07-15 11:23:29.912985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.944 test_start 00:04:53.944 oneshot 00:04:53.944 tick 100 00:04:53.944 tick 100 00:04:53.944 tick 250 00:04:53.944 tick 100 00:04:53.944 tick 100 00:04:53.944 tick 100 00:04:53.944 tick 250 00:04:53.944 tick 500 00:04:53.944 tick 100 00:04:53.944 tick 100 00:04:53.944 tick 250 00:04:53.944 tick 100 00:04:53.944 tick 100 00:04:53.944 test_end 00:04:53.944 00:04:53.944 real 0m1.281s 00:04:53.944 user 0m1.135s 00:04:53.944 sys 0m0.040s 00:04:53.945 11:23:30 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.945 ************************************ 00:04:53.945 11:23:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:53.945 END TEST event_reactor 00:04:53.945 ************************************ 00:04:53.945 11:23:31 event -- common/autotest_common.sh@1142 -- # return 0 00:04:53.945 11:23:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.945 11:23:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:53.945 11:23:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.945 11:23:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.945 ************************************ 00:04:53.945 START TEST event_reactor_perf 00:04:53.945 ************************************ 00:04:53.945 11:23:31 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.945 [2024-07-15 11:23:31.050929] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:53.945 [2024-07-15 11:23:31.051042] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62103 ] 00:04:53.945 [2024-07-15 11:23:31.184918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.945 [2024-07-15 11:23:31.245610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.877 test_start 00:04:54.877 test_end 00:04:54.877 Performance: 345762 events per second 00:04:54.877 00:04:54.877 real 0m1.283s 00:04:54.877 user 0m1.142s 00:04:54.877 sys 0m0.032s 00:04:54.877 11:23:32 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.877 ************************************ 00:04:54.877 END TEST event_reactor_perf 00:04:54.877 11:23:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.877 ************************************ 00:04:54.877 11:23:32 event -- common/autotest_common.sh@1142 -- # return 0 00:04:54.877 11:23:32 event -- event/event.sh@49 -- # uname -s 00:04:55.135 11:23:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:55.135 11:23:32 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:55.135 11:23:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.135 11:23:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.135 11:23:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.135 ************************************ 00:04:55.135 START TEST event_scheduler 00:04:55.135 ************************************ 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:55.135 * Looking for test storage... 00:04:55.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:55.135 11:23:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:55.135 11:23:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62165 00:04:55.135 11:23:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.135 11:23:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:55.135 11:23:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62165 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62165 ']' 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.135 11:23:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.135 [2024-07-15 11:23:32.512912] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:55.135 [2024-07-15 11:23:32.513739] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62165 ] 00:04:55.392 [2024-07-15 11:23:32.660243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:55.392 [2024-07-15 11:23:32.724954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.392 [2024-07-15 11:23:32.725040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.392 [2024-07-15 11:23:32.725134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.392 [2024-07-15 11:23:32.725142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:56.325 11:23:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.325 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.325 POWER: Cannot set governor of lcore 0 to performance 00:04:56.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.325 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.325 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:56.325 POWER: Cannot set governor of lcore 0 to userspace 00:04:56.325 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:56.325 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:56.325 POWER: Unable to set Power Management Environment for lcore 0 00:04:56.325 [2024-07-15 11:23:33.486879] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:56.325 [2024-07-15 11:23:33.486893] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:56.325 [2024-07-15 11:23:33.486902] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.325 [2024-07-15 11:23:33.486914] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.325 [2024-07-15 11:23:33.486921] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.325 [2024-07-15 11:23:33.486929] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.325 11:23:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.325 [2024-07-15 11:23:33.541445] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
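The scheduler app above was started with --wait-for-rpc, so the test first switches the framework to the dynamic scheduler over RPC and only then calls framework_start_init; the POWER/governor errors appear to be benign here, since the VM does not expose writable cpufreq sysfs files and the code falls back as the notices show. Against a target idling in --wait-for-rpc mode, the same two RPCs would be (default socket assumed):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init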
00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.325 11:23:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.325 11:23:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.325 ************************************ 00:04:56.325 START TEST scheduler_create_thread 00:04:56.325 ************************************ 00:04:56.325 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:56.325 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.325 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 2 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 3 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 4 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 5 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 6 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 7 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 8 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 9 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 10 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.326 11:23:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.261 11:23:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.261 00:04:57.261 real 0m1.170s 00:04:57.261 user 0m0.011s 00:04:57.261 sys 0m0.009s 00:04:57.261 11:23:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.261 ************************************ 00:04:57.261 END TEST scheduler_create_thread 00:04:57.261 11:23:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.261 ************************************ 00:04:57.520 11:23:34 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:57.521 11:23:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:57.521 11:23:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62165 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62165 ']' 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62165 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62165 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:57.521 killing process with pid 62165 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62165' 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62165 00:04:57.521 11:23:34 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62165 00:04:57.779 [2024-07-15 11:23:35.199792] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
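The scheduler_create_thread trace above reduces to a short RPC sequence against the running scheduler test app: create core-pinned busy (-a 100) and idle (-a 0) threads, add unpinned threads with partial load, raise one thread's activity with scheduler_thread_set_active, and delete another. A condensed sketch of that sequence follows; it assumes scripts/rpc.py can import scheduler_plugin (the traced rpc_cmd wrapper sets this up, e.g. via PYTHONPATH) and that the app listens on the default RPC socket.

  rpc="scripts/rpc.py --plugin scheduler_plugin"   # plugin import path is assumed, handled by rpc_cmd in the trace

  # one busy and one idle thread pinned to each of cores 0-3
  for mask in 0x1 0x2 0x4 0x8; do
      $rpc scheduler_thread_create -n active_pinned -m $mask -a 100
      $rpc scheduler_thread_create -n idle_pinned   -m $mask -a 0
  done

  # unpinned threads with partial load
  $rpc scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc scheduler_thread_create -n half_active -a 0)   # the RPC returns the new thread id
  $rpc scheduler_thread_set_active "$tid" 50                # retarget it to 50% busy

  # a thread that is created and then deleted straight away
  tid=$($rpc scheduler_thread_create -n deleted -a 100)
  $rpc scheduler_thread_delete "$tid"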
00:04:58.038 00:04:58.038 real 0m3.002s 00:04:58.038 user 0m5.581s 00:04:58.038 sys 0m0.298s 00:04:58.038 ************************************ 00:04:58.038 END TEST event_scheduler 00:04:58.038 ************************************ 00:04:58.038 11:23:35 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.038 11:23:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.038 11:23:35 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.038 11:23:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:58.038 11:23:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:58.038 11:23:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.038 11:23:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.038 11:23:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.038 ************************************ 00:04:58.038 START TEST app_repeat 00:04:58.038 ************************************ 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62266 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.038 Process app_repeat pid: 62266 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62266' 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.038 spdk_app_start Round 0 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:58.038 11:23:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62266 /var/tmp/spdk-nbd.sock 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62266 ']' 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.038 11:23:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.038 [2024-07-15 11:23:35.449774] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:04:58.039 [2024-07-15 11:23:35.449869] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62266 ] 00:04:58.297 [2024-07-15 11:23:35.584607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.297 [2024-07-15 11:23:35.646063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.297 [2024-07-15 11:23:35.646071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.297 11:23:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.297 11:23:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.297 11:23:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.556 Malloc0 00:04:58.556 11:23:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.813 Malloc1 00:04:59.072 11:23:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.072 11:23:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.072 /dev/nbd0 00:04:59.331 11:23:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.331 11:23:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.331 11:23:36 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.331 1+0 records in 00:04:59.331 1+0 records out 00:04:59.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231558 s, 17.7 MB/s 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.331 11:23:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.331 11:23:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.331 11:23:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.331 11:23:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.589 /dev/nbd1 00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.589 1+0 records in 00:04:59.589 1+0 records out 00:04:59.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260901 s, 15.7 MB/s 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.589 11:23:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
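Both NBD devices pass the waitfornbd readiness check traced above: the helper polls /proc/partitions until the device name appears, then reads a single 4 KiB block with O_DIRECT and confirms the copied file is non-empty. An illustrative re-implementation of that check is sketched below; the retry delay and the temp-file path are assumptions, not taken from the trace.

  waitfornbd() {
      local nbd_name=$1 i
      local tmp=/tmp/nbdtest                     # temp path is illustrative; the trace writes under the repo
      for ((i = 1; i <= 20; i++)); do            # 20-iteration limit matches the trace
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                              # retry delay is an assumption
      done
      # read one 4 KiB block with O_DIRECT to prove the device services I/O
      dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null || return 1
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]
  }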
00:04:59.589 11:23:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.889 { 00:04:59.889 "bdev_name": "Malloc0", 00:04:59.889 "nbd_device": "/dev/nbd0" 00:04:59.889 }, 00:04:59.889 { 00:04:59.889 "bdev_name": "Malloc1", 00:04:59.889 "nbd_device": "/dev/nbd1" 00:04:59.889 } 00:04:59.889 ]' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.889 { 00:04:59.889 "bdev_name": "Malloc0", 00:04:59.889 "nbd_device": "/dev/nbd0" 00:04:59.889 }, 00:04:59.889 { 00:04:59.889 "bdev_name": "Malloc1", 00:04:59.889 "nbd_device": "/dev/nbd1" 00:04:59.889 } 00:04:59.889 ]' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.889 /dev/nbd1' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.889 /dev/nbd1' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.889 256+0 records in 00:04:59.889 256+0 records out 00:04:59.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0093479 s, 112 MB/s 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.889 256+0 records in 00:04:59.889 256+0 records out 00:04:59.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264205 s, 39.7 MB/s 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.889 256+0 records in 00:04:59.889 256+0 records out 00:04:59.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316489 s, 33.1 MB/s 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.889 11:23:37 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.889 11:23:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.456 11:23:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.714 11:23:37 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.714 11:23:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.973 11:23:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.973 11:23:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.232 11:23:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.232 [2024-07-15 11:23:38.702635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.491 [2024-07-15 11:23:38.761919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.491 [2024-07-15 11:23:38.761926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.491 [2024-07-15 11:23:38.792014] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.491 [2024-07-15 11:23:38.792077] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.775 11:23:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.775 spdk_app_start Round 1 00:05:04.775 11:23:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:04.775 11:23:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62266 /var/tmp/spdk-nbd.sock 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62266 ']' 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
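The round-0 data pass above (nbd_dd_data_verify) is a plain dd/cmp exercise: generate a 1 MiB random pattern file, write it to each exported NBD device with O_DIRECT, then byte-compare the first 1 MiB of each device against the pattern. A stand-alone sketch of the same flow, with an illustrative pattern path in place of the repo path used in the trace:

  pattern=/tmp/nbdrandtest                                        # illustrative path
  dd if=/dev/urandom of="$pattern" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$pattern" "$nbd"                              # verify phase: byte compare 1 MiB
  done
  rm "$pattern"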
00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.775 11:23:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.775 11:23:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.775 Malloc0 00:05:04.775 11:23:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.033 Malloc1 00:05:05.033 11:23:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.033 11:23:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.291 /dev/nbd0 00:05:05.291 11:23:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.291 11:23:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.291 11:23:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.291 11:23:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.291 11:23:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.291 11:23:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.291 11:23:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.550 1+0 records in 00:05:05.550 1+0 records out 
00:05:05.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272633 s, 15.0 MB/s 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.550 11:23:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.550 11:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.550 11:23:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.550 11:23:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.810 /dev/nbd1 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.810 1+0 records in 00:05:05.810 1+0 records out 00:05:05.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270585 s, 15.1 MB/s 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.810 11:23:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.810 11:23:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.067 { 00:05:06.067 "bdev_name": "Malloc0", 00:05:06.067 "nbd_device": "/dev/nbd0" 00:05:06.067 }, 00:05:06.067 { 00:05:06.067 "bdev_name": "Malloc1", 00:05:06.067 "nbd_device": "/dev/nbd1" 00:05:06.067 } 
00:05:06.067 ]' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.067 { 00:05:06.067 "bdev_name": "Malloc0", 00:05:06.067 "nbd_device": "/dev/nbd0" 00:05:06.067 }, 00:05:06.067 { 00:05:06.067 "bdev_name": "Malloc1", 00:05:06.067 "nbd_device": "/dev/nbd1" 00:05:06.067 } 00:05:06.067 ]' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.067 /dev/nbd1' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.067 /dev/nbd1' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.067 256+0 records in 00:05:06.067 256+0 records out 00:05:06.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593185 s, 177 MB/s 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.067 256+0 records in 00:05:06.067 256+0 records out 00:05:06.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257416 s, 40.7 MB/s 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.067 256+0 records in 00:05:06.067 256+0 records out 00:05:06.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286161 s, 36.6 MB/s 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.067 11:23:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.068 11:23:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.068 11:23:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.326 11:23:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.584 11:23:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.841 11:23:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.100 11:23:44 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.100 11:23:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.100 11:23:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.358 11:23:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.616 [2024-07-15 11:23:44.901685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.616 [2024-07-15 11:23:44.961885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.616 [2024-07-15 11:23:44.961897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.616 [2024-07-15 11:23:44.993967] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.617 [2024-07-15 11:23:44.994030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.915 spdk_app_start Round 2 00:05:10.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.916 11:23:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.916 11:23:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:10.916 11:23:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62266 /var/tmp/spdk-nbd.sock 00:05:10.916 11:23:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62266 ']' 00:05:10.916 11:23:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.916 11:23:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.916 11:23:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
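The disk-count check that closes each round (nbd_get_count in the trace) lists the exported devices over the RPC socket and counts how many /dev/nbd entries come back; the `|| true` guard matches the trace, since grep -c exits non-zero when it counts zero matches. A condensed sketch, with the rpc.py path shortened for readability:

  rpc=scripts/rpc.py                                   # trace uses the full repo path
  disks=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)    # 2 while exported, 0 after nbd_stop_disk
  if [ "$count" -ne 0 ]; then
      echo "devices still exported: $disks"
  fi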
00:05:10.916 11:23:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.916 11:23:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.916 11:23:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.916 11:23:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:10.916 11:23:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.916 Malloc0 00:05:10.916 11:23:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.174 Malloc1 00:05:11.174 11:23:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.174 11:23:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.433 /dev/nbd0 00:05:11.433 11:23:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.433 11:23:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.433 1+0 records in 00:05:11.433 1+0 records out 
00:05:11.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256885 s, 15.9 MB/s 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:11.433 11:23:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:11.433 11:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.433 11:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.433 11:23:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.001 /dev/nbd1 00:05:12.001 11:23:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.001 11:23:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.001 11:23:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.002 1+0 records in 00:05:12.002 1+0 records out 00:05:12.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305221 s, 13.4 MB/s 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:12.002 11:23:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:12.002 11:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.002 11:23:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.002 11:23:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.002 11:23:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.002 11:23:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.260 { 00:05:12.260 "bdev_name": "Malloc0", 00:05:12.260 "nbd_device": "/dev/nbd0" 00:05:12.260 }, 00:05:12.260 { 00:05:12.260 "bdev_name": "Malloc1", 00:05:12.260 "nbd_device": "/dev/nbd1" 00:05:12.260 } 
00:05:12.260 ]' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.260 { 00:05:12.260 "bdev_name": "Malloc0", 00:05:12.260 "nbd_device": "/dev/nbd0" 00:05:12.260 }, 00:05:12.260 { 00:05:12.260 "bdev_name": "Malloc1", 00:05:12.260 "nbd_device": "/dev/nbd1" 00:05:12.260 } 00:05:12.260 ]' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.260 /dev/nbd1' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.260 /dev/nbd1' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.260 11:23:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.260 256+0 records in 00:05:12.260 256+0 records out 00:05:12.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836637 s, 125 MB/s 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.261 256+0 records in 00:05:12.261 256+0 records out 00:05:12.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260824 s, 40.2 MB/s 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.261 256+0 records in 00:05:12.261 256+0 records out 00:05:12.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276982 s, 37.9 MB/s 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.261 11:23:49 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.261 11:23:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.519 11:23:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.777 11:23:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.344 11:23:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.344 11:23:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.602 11:23:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.602 [2024-07-15 11:23:50.978209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.602 [2024-07-15 11:23:51.039070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.602 [2024-07-15 11:23:51.039081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.602 [2024-07-15 11:23:51.070803] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.602 [2024-07-15 11:23:51.070870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.881 11:23:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62266 /var/tmp/spdk-nbd.sock 00:05:16.881 11:23:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62266 ']' 00:05:16.881 11:23:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.881 11:23:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.881 11:23:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
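Rounds 0 through 2 above all follow the same driver loop from event.sh: the app_repeat binary stays up for the whole test, each round waits for the RPC socket, creates two malloc bdevs, re-verifies them over NBD, then asks the instance to restart itself with spdk_kill_instance SIGTERM and sleeps out the restart window. The sketch below condenses that loop as the trace shows it; paths are repo-relative rather than the full /home/vagrant/spdk_repo paths, and waitforlisten, killprocess and nbd_rpc_data_verify are the sourced test helpers seen throughout this log, not standalone commands.

  app_repeat=test/event/app_repeat/app_repeat
  $app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock                       # wait for the RPC socket each round
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      # ask the app to restart: the log shows it re-entering spdk_app_start on the signal
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3
  done

  waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
  killprocess $repeat_pid
  trap - SIGINT SIGTERM EXIT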
00:05:16.881 11:23:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.881 11:23:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.881 11:23:54 event.app_repeat -- event/event.sh@39 -- # killprocess 62266 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62266 ']' 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62266 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62266 00:05:16.881 killing process with pid 62266 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62266' 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62266 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62266 00:05:16.881 spdk_app_start is called in Round 0. 00:05:16.881 Shutdown signal received, stop current app iteration 00:05:16.881 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:05:16.881 spdk_app_start is called in Round 1. 00:05:16.881 Shutdown signal received, stop current app iteration 00:05:16.881 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:05:16.881 spdk_app_start is called in Round 2. 00:05:16.881 Shutdown signal received, stop current app iteration 00:05:16.881 Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 reinitialization... 00:05:16.881 spdk_app_start is called in Round 3. 
00:05:16.881 Shutdown signal received, stop current app iteration 00:05:16.881 11:23:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.881 11:23:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.881 00:05:16.881 real 0m18.903s 00:05:16.881 user 0m43.061s 00:05:16.881 sys 0m2.912s 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.881 11:23:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.881 ************************************ 00:05:16.881 END TEST app_repeat 00:05:16.881 ************************************ 00:05:17.139 11:23:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.139 11:23:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:17.139 11:23:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:17.139 11:23:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.139 11:23:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.139 11:23:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.139 ************************************ 00:05:17.139 START TEST cpu_locks 00:05:17.139 ************************************ 00:05:17.139 11:23:54 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:17.139 * Looking for test storage... 00:05:17.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.139 11:23:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:17.139 11:23:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:17.139 11:23:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:17.139 11:23:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:17.139 11:23:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.139 11:23:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.139 11:23:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.139 ************************************ 00:05:17.139 START TEST default_locks 00:05:17.139 ************************************ 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62877 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62877 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62877 ']' 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
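default_locks, starting here, launches a single target pinned to core 0 and blocks until its RPC socket answers. A rough equivalent of that launch-and-wait step (waitforlisten is the autotest helper; the polling loop below is only an approximation of what it does, assuming the default /var/tmp/spdk.sock socket):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  # poll the RPC socket until the target is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done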
00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.139 11:23:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.139 [2024-07-15 11:23:54.539473] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:17.139 [2024-07-15 11:23:54.539608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:05:17.397 [2024-07-15 11:23:54.678025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.397 [2024-07-15 11:23:54.752754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.330 11:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.330 11:23:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:18.330 11:23:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62877 00:05:18.330 11:23:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62877 00:05:18.330 11:23:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.587 11:23:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62877 00:05:18.587 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62877 ']' 00:05:18.587 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62877 00:05:18.587 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:18.587 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.587 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62877 00:05:18.843 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.843 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.843 killing process with pid 62877 00:05:18.843 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62877' 00:05:18.843 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62877 00:05:18.843 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62877 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62877 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62877 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62877 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62877 ']' 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.106 ERROR: process (pid: 62877) is no longer running 00:05:19.106 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62877) - No such process 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.106 00:05:19.106 real 0m1.872s 00:05:19.106 user 0m2.164s 00:05:19.106 sys 0m0.513s 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.106 11:23:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.106 ************************************ 00:05:19.106 END TEST default_locks 00:05:19.106 ************************************ 00:05:19.106 11:23:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.106 11:23:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:19.106 11:23:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.106 11:23:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.106 11:23:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.106 ************************************ 00:05:19.106 START TEST default_locks_via_rpc 00:05:19.106 ************************************ 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62941 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62941 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62941 ']' 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.106 11:23:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.107 [2024-07-15 11:23:56.460235] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:19.107 [2024-07-15 11:23:56.460330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62941 ] 00:05:19.363 [2024-07-15 11:23:56.593147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.363 [2024-07-15 11:23:56.675728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.291 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.291 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:20.291 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62941 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62941 00:05:20.292 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62941 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62941 ']' 
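The locks_exist check traced above is lslocks filtered for the SPDK lock files: while a target owns a core it holds a lock on a file named spdk_cpu_lock_<core> under /var/tmp. A sketch of the same check against an arbitrary target PID:

  # succeeds only while the process still holds at least one core lock
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo 'core locks held'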
00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62941 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62941 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.549 killing process with pid 62941 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62941' 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62941 00:05:20.549 11:23:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62941 00:05:20.807 00:05:20.807 real 0m1.747s 00:05:20.807 user 0m2.032s 00:05:20.807 sys 0m0.449s 00:05:20.807 11:23:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.807 11:23:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.807 ************************************ 00:05:20.807 END TEST default_locks_via_rpc 00:05:20.807 ************************************ 00:05:20.807 11:23:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.807 11:23:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.807 11:23:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.807 11:23:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.807 11:23:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.807 ************************************ 00:05:20.807 START TEST non_locking_app_on_locked_coremask 00:05:20.807 ************************************ 00:05:20.807 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:20.807 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63010 00:05:20.807 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63010 /var/tmp/spdk.sock 00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63010 ']' 00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
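default_locks_via_rpc, which just finished above, exercises the two runtime RPCs instead of command-line flags: framework_disable_cpumask_locks releases the spdk_cpu_lock_* files and framework_enable_cpumask_locks reclaims them for the target's cpumask. A sketch of the pair against the default socket:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks   # drop the core locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks    # take them back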
00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.808 11:23:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.808 [2024-07-15 11:23:58.268822] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:20.808 [2024-07-15 11:23:58.268949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63010 ] 00:05:21.066 [2024-07-15 11:23:58.408517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.066 [2024-07-15 11:23:58.495138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63038 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63038 /var/tmp/spdk2.sock 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63038 ']' 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.999 11:23:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.999 [2024-07-15 11:23:59.306706] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:21.999 [2024-07-15 11:23:59.306822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63038 ] 00:05:21.999 [2024-07-15 11:23:59.458023] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
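non_locking_app_on_locked_coremask pairs a normally locked target with a second instance that opts out of locking, so both can share core 0; the second instance also needs its own RPC socket. Reduced to the flags traced above, the two launches look roughly like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                                                  # holds the core 0 lock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips locking entirely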
00:05:21.999 [2024-07-15 11:23:59.458114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.256 [2024-07-15 11:23:59.625613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.189 11:24:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.189 11:24:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:23.189 11:24:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63010 00:05:23.189 11:24:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63010 00:05:23.189 11:24:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63010 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63010 ']' 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63010 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63010 00:05:23.754 killing process with pid 63010 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63010' 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63010 00:05:23.754 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63010 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63038 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63038 ']' 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63038 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63038 00:05:24.319 killing process with pid 63038 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63038' 00:05:24.319 11:24:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63038 00:05:24.319 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63038 00:05:24.578 ************************************ 00:05:24.578 END TEST non_locking_app_on_locked_coremask 00:05:24.578 ************************************ 00:05:24.578 00:05:24.578 real 0m3.746s 00:05:24.578 user 0m4.427s 00:05:24.578 sys 0m0.930s 00:05:24.578 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.578 11:24:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 11:24:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.578 11:24:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.578 11:24:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.578 11:24:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.578 11:24:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.578 ************************************ 00:05:24.578 START TEST locking_app_on_unlocked_coremask 00:05:24.578 ************************************ 00:05:24.578 11:24:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63117 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63117 /var/tmp/spdk.sock 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63117 ']' 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.578 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.837 [2024-07-15 11:24:02.112460] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:24.837 [2024-07-15 11:24:02.112732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63117 ] 00:05:24.837 [2024-07-15 11:24:02.263886] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
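locking_app_on_unlocked_coremask, starting here, inverts the order: the first target launches with --disable-cpumask-locks, so the core 0 lock stays free and the second, normally locked target can claim it. A sketch using only the flags traced in this test:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves the lock file unclaimed
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims spdk_cpu_lock_000 itself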
00:05:24.837 [2024-07-15 11:24:02.264046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.096 [2024-07-15 11:24:02.328990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63126 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63126 /var/tmp/spdk2.sock 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63126 ']' 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.096 11:24:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.096 [2024-07-15 11:24:02.563084] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:25.096 [2024-07-15 11:24:02.563570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63126 ] 00:05:25.354 [2024-07-15 11:24:02.720299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.612 [2024-07-15 11:24:02.896428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.184 11:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.184 11:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:26.184 11:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63126 00:05:26.184 11:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63126 00:05:26.184 11:24:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63117 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63117 ']' 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63117 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63117 00:05:27.118 killing process with pid 63117 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63117' 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63117 00:05:27.118 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63117 00:05:27.376 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63126 00:05:27.376 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63126 ']' 00:05:27.376 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63126 00:05:27.376 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:27.376 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.376 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63126 00:05:27.634 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.634 killing process with pid 63126 00:05:27.634 11:24:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.634 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63126' 00:05:27.634 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63126 00:05:27.634 11:24:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63126 00:05:27.634 00:05:27.634 real 0m3.106s 00:05:27.634 user 0m3.565s 00:05:27.634 sys 0m0.950s 00:05:27.634 11:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.634 11:24:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.634 ************************************ 00:05:27.634 END TEST locking_app_on_unlocked_coremask 00:05:27.634 ************************************ 00:05:27.892 11:24:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.892 11:24:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:27.892 11:24:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.892 11:24:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.892 11:24:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.892 ************************************ 00:05:27.892 START TEST locking_app_on_locked_coremask 00:05:27.892 ************************************ 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63205 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63205 /var/tmp/spdk.sock 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63205 ']' 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.892 11:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.892 [2024-07-15 11:24:05.208482] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:27.892 [2024-07-15 11:24:05.208588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:05:27.892 [2024-07-15 11:24:05.339018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.150 [2024-07-15 11:24:05.413222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63233 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63233 /var/tmp/spdk2.sock 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63233 /var/tmp/spdk2.sock 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63233 /var/tmp/spdk2.sock 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63233 ']' 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.750 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.019 [2024-07-15 11:24:06.242420] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:29.019 [2024-07-15 11:24:06.242537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:05:29.019 [2024-07-15 11:24:06.392499] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63205 has claimed it. 00:05:29.019 [2024-07-15 11:24:06.392600] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63233) - No such process 00:05:29.586 ERROR: process (pid: 63233) is no longer running 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63205 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63205 00:05:29.586 11:24:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63205 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63205 ']' 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63205 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63205 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.845 killing process with pid 63205 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63205' 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63205 00:05:29.845 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63205 00:05:30.104 00:05:30.104 real 0m2.393s 00:05:30.104 user 0m2.874s 00:05:30.104 sys 0m0.518s 00:05:30.104 11:24:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.104 11:24:07 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:30.104 ************************************ 00:05:30.104 END TEST locking_app_on_locked_coremask 00:05:30.104 ************************************ 00:05:30.104 11:24:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:30.104 11:24:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.104 11:24:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.104 11:24:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.104 11:24:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.363 ************************************ 00:05:30.363 START TEST locking_overlapped_coremask 00:05:30.363 ************************************ 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63279 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63279 /var/tmp/spdk.sock 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63279 ']' 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.363 11:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.363 [2024-07-15 11:24:07.643696] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:30.363 [2024-07-15 11:24:07.643807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63279 ] 00:05:30.363 [2024-07-15 11:24:07.774528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.363 [2024-07-15 11:24:07.837015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.363 [2024-07-15 11:24:07.837110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.363 [2024-07-15 11:24:07.837124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63309 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63309 /var/tmp/spdk2.sock 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63309 /var/tmp/spdk2.sock 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63309 /var/tmp/spdk2.sock 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63309 ']' 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.296 11:24:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.297 [2024-07-15 11:24:08.643036] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:31.297 [2024-07-15 11:24:08.643119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:05:31.554 [2024-07-15 11:24:08.785819] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63279 has claimed it. 00:05:31.554 [2024-07-15 11:24:08.785900] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.121 ERROR: process (pid: 63309) is no longer running 00:05:32.121 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63309) - No such process 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63279 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63279 ']' 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63279 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63279 00:05:32.121 killing process with pid 63279 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63279' 00:05:32.121 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63279 00:05:32.121 11:24:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63279 00:05:32.379 ************************************ 00:05:32.379 END TEST locking_overlapped_coremask 00:05:32.379 ************************************ 00:05:32.379 00:05:32.379 real 0m2.069s 00:05:32.379 user 0m5.911s 00:05:32.379 sys 0m0.301s 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.379 11:24:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:32.379 11:24:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.379 11:24:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.379 11:24:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.379 11:24:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.379 ************************************ 00:05:32.379 START TEST locking_overlapped_coremask_via_rpc 00:05:32.379 ************************************ 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63355 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63355 /var/tmp/spdk.sock 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63355 ']' 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.379 11:24:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.379 [2024-07-15 11:24:09.782016] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:32.379 [2024-07-15 11:24:09.782146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63355 ] 00:05:32.638 [2024-07-15 11:24:09.924094] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
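locking_overlapped_coremask, which just completed, shows why the masks matter: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target trips over the core 2 lock at startup and exits with the claim_cpu_cores error logged above. In outline:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &                            # locks cores 0, 1, 2
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock      # wants cores 2, 3, 4 -> aborts on core 2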
00:05:32.638 [2024-07-15 11:24:09.924316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.638 [2024-07-15 11:24:09.985579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.638 [2024-07-15 11:24:09.985665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.638 [2024-07-15 11:24:09.985677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63385 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63385 /var/tmp/spdk2.sock 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63385 ']' 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.573 11:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.573 [2024-07-15 11:24:10.868587] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:33.573 [2024-07-15 11:24:10.868681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:05:33.573 [2024-07-15 11:24:11.012340] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.573 [2024-07-15 11:24:11.012402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.831 [2024-07-15 11:24:11.132571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.831 [2024-07-15 11:24:11.132601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:33.831 [2024-07-15 11:24:11.132604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.763 11:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.763 [2024-07-15 11:24:11.998729] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63355 has claimed it. 
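At this point the first target (pid 63355, /var/tmp/spdk.sock) has taken locks on cores 0-2 via framework_enable_cpumask_locks, so the same call against the second target (/var/tmp/spdk2.sock) fails on the shared core 2, as the NOT wrapper expects. A sketch of issuing both calls by hand with the bundled Python RPC client (the test itself goes through rpc_cmd and the Go client; the script path is assumed from the repo layout used above):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                          # first target, default socket, succeeds
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target, fails: core 2 already claimed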
00:05:34.763 2024/07/15 11:24:12 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:34.763 request: 00:05:34.763 { 00:05:34.763 "method": "framework_enable_cpumask_locks", 00:05:34.763 "params": {} 00:05:34.763 } 00:05:34.763 Got JSON-RPC error response 00:05:34.763 GoRPCClient: error on JSON-RPC call 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63355 /var/tmp/spdk.sock 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63355 ']' 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.763 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63385 /var/tmp/spdk2.sock 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63385 ']' 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
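With the first target now holding its locks, one lock file per claimed core should exist under /var/tmp; check_remaining_locks below verifies exactly that by comparing the glob against the expected set. A manual equivalent, assuming the same lock-file prefix:

    ls /var/tmp/spdk_cpu_lock_*    # expected: spdk_cpu_lock_000, _001 and _002 for cores 0-2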
00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.021 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:35.587 ************************************ 00:05:35.587 END TEST locking_overlapped_coremask_via_rpc 00:05:35.587 ************************************ 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:35.587 00:05:35.587 real 0m3.173s 00:05:35.587 user 0m1.809s 00:05:35.587 sys 0m0.281s 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.587 11:24:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:35.587 11:24:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:35.587 11:24:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63355 ]] 00:05:35.587 11:24:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63355 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63355 ']' 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63355 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63355 00:05:35.587 killing process with pid 63355 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63355' 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63355 00:05:35.587 11:24:12 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63355 00:05:35.855 11:24:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63385 ]] 00:05:35.855 11:24:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63385 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63385 ']' 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63385 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:35.855 11:24:13 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63385 00:05:35.855 killing process with pid 63385 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63385' 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63385 00:05:35.855 11:24:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63385 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63355 ]] 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63355 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63355 ']' 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63355 00:05:36.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63355) - No such process 00:05:36.112 Process with pid 63355 is not found 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63355 is not found' 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63385 ]] 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63385 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63385 ']' 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63385 00:05:36.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63385) - No such process 00:05:36.112 Process with pid 63385 is not found 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63385 is not found' 00:05:36.112 11:24:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.112 00:05:36.112 real 0m19.140s 00:05:36.112 user 0m37.122s 00:05:36.112 sys 0m4.569s 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.112 ************************************ 00:05:36.112 END TEST cpu_locks 00:05:36.112 11:24:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.112 ************************************ 00:05:36.112 11:24:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:36.112 00:05:36.112 real 0m45.280s 00:05:36.112 user 1m32.262s 00:05:36.112 sys 0m8.153s 00:05:36.112 11:24:13 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.112 ************************************ 00:05:36.112 END TEST event 00:05:36.112 11:24:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.112 ************************************ 00:05:36.112 11:24:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.112 11:24:13 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.112 11:24:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.112 11:24:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.112 11:24:13 -- common/autotest_common.sh@10 -- # set +x 00:05:36.369 ************************************ 00:05:36.369 START TEST thread 
00:05:36.369 ************************************ 00:05:36.369 11:24:13 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.369 * Looking for test storage... 00:05:36.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:36.369 11:24:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.369 11:24:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:36.369 11:24:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.369 11:24:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.369 ************************************ 00:05:36.369 START TEST thread_poller_perf 00:05:36.369 ************************************ 00:05:36.369 11:24:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.369 [2024-07-15 11:24:13.682287] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:36.369 [2024-07-15 11:24:13.682428] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63537 ] 00:05:36.369 [2024-07-15 11:24:13.821627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.627 [2024-07-15 11:24:13.909284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.627 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:37.560 ====================================== 00:05:37.560 busy:2214540347 (cyc) 00:05:37.560 total_run_count: 288000 00:05:37.560 tsc_hz: 2200000000 (cyc) 00:05:37.560 ====================================== 00:05:37.560 poller_cost: 7689 (cyc), 3495 (nsec) 00:05:37.560 00:05:37.560 real 0m1.338s 00:05:37.560 user 0m1.178s 00:05:37.560 sys 0m0.049s 00:05:37.560 11:24:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.560 11:24:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.560 ************************************ 00:05:37.560 END TEST thread_poller_perf 00:05:37.560 ************************************ 00:05:37.560 11:24:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:37.560 11:24:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.560 11:24:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:37.560 11:24:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.560 11:24:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.818 ************************************ 00:05:37.818 START TEST thread_poller_perf 00:05:37.818 ************************************ 00:05:37.818 11:24:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.818 [2024-07-15 11:24:15.061441] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
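The poller_perf flags map directly onto the banner above: -b 1000 registers 1000 pollers, -l is the poller period in microseconds (1 for this run, 0 for the next one), and -t 1 runs the measurement for one second.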
00:05:37.818 [2024-07-15 11:24:15.061601] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63573 ] 00:05:37.818 [2024-07-15 11:24:15.202676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.818 [2024-07-15 11:24:15.272085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.818 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:39.191 ====================================== 00:05:39.191 busy:2202482322 (cyc) 00:05:39.191 total_run_count: 3814000 00:05:39.191 tsc_hz: 2200000000 (cyc) 00:05:39.191 ====================================== 00:05:39.191 poller_cost: 577 (cyc), 262 (nsec) 00:05:39.191 00:05:39.191 real 0m1.303s 00:05:39.191 user 0m1.150s 00:05:39.191 sys 0m0.045s 00:05:39.191 11:24:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.191 11:24:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.191 ************************************ 00:05:39.191 END TEST thread_poller_perf 00:05:39.191 ************************************ 00:05:39.191 11:24:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:39.191 11:24:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:39.191 ************************************ 00:05:39.191 END TEST thread 00:05:39.191 ************************************ 00:05:39.191 00:05:39.191 real 0m2.786s 00:05:39.191 user 0m2.395s 00:05:39.191 sys 0m0.172s 00:05:39.191 11:24:16 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.191 11:24:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.191 11:24:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.191 11:24:16 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:39.191 11:24:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.191 11:24:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.191 11:24:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.191 ************************************ 00:05:39.191 START TEST accel 00:05:39.191 ************************************ 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:39.191 * Looking for test storage... 
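The poller_cost figures are simply busy cycles divided by iterations: 2214540347 / 288000 is roughly 7689 cycles for the 1-microsecond-period run, and 2202482322 / 3814000 is roughly 577 cycles for the 0-microsecond-period run; at the reported 2200000000 Hz TSC that works out to about 3495 ns and 262 ns per poller invocation respectively, matching the two result blocks above.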
00:05:39.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:39.191 11:24:16 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:39.191 11:24:16 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:39.191 11:24:16 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.191 11:24:16 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63646 00:05:39.191 11:24:16 accel -- accel/accel.sh@63 -- # waitforlisten 63646 00:05:39.191 11:24:16 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:39.191 11:24:16 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@829 -- # '[' -z 63646 ']' 00:05:39.191 11:24:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.191 11:24:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.191 11:24:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.191 11:24:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.191 11:24:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.191 11:24:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:39.191 11:24:16 accel -- accel/accel.sh@41 -- # jq -r . 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.191 11:24:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.191 [2024-07-15 11:24:16.553928] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:39.191 [2024-07-15 11:24:16.554014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63646 ] 00:05:39.449 [2024-07-15 11:24:16.685188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.449 [2024-07-15 11:24:16.772539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@862 -- # return 0 00:05:40.383 11:24:17 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:40.383 11:24:17 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:40.383 11:24:17 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:40.383 11:24:17 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:40.383 11:24:17 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:40.383 11:24:17 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:40.383 11:24:17 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 
11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # IFS== 00:05:40.383 11:24:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:40.383 11:24:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:40.383 11:24:17 accel -- accel/accel.sh@75 -- # killprocess 63646 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@948 -- # '[' -z 63646 ']' 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@952 -- # kill -0 63646 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@953 -- # uname 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63646 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.383 killing process with pid 63646 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63646' 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@967 -- # kill 63646 00:05:40.383 11:24:17 accel -- common/autotest_common.sh@972 -- # wait 63646 00:05:40.640 11:24:17 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:40.640 11:24:17 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 11:24:17 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:40.640 11:24:17 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
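The expected_opcs loop above reads the opcode-to-module table from the target; with no hardware accel modules configured, every opcode is expected to land on the software module. A sketch of running the same query by hand (the rpc.py path is an assumption based on the repo layout; the jq filter is the one the test uses):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'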
00:05:40.640 11:24:17 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.640 11:24:17 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.640 11:24:17 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.640 11:24:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.640 ************************************ 00:05:40.640 START TEST accel_missing_filename 00:05:40.640 ************************************ 00:05:40.640 11:24:17 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.640 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:40.640 11:24:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:40.640 [2024-07-15 11:24:18.024676] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:40.640 [2024-07-15 11:24:18.024767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63711 ] 00:05:40.897 [2024-07-15 11:24:18.158204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.897 [2024-07-15 11:24:18.216966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.897 [2024-07-15 11:24:18.246947] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.897 [2024-07-15 11:24:18.285383] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:40.897 A filename is required. 
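The bare '-t 1 -w compress' run above aborts with 'A filename is required.' because, per the accel_perf usage text, compress/decompress workloads need an input file via -l. A sketch of an invocation that would get past argument parsing, reusing the bib test file the next case points at (whether that file is suitable outside the test harness is an assumption):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib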
00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.897 00:05:40.897 real 0m0.370s 00:05:40.897 user 0m0.253s 00:05:40.897 sys 0m0.068s 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.897 11:24:18 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:40.897 ************************************ 00:05:40.897 END TEST accel_missing_filename 00:05:40.897 ************************************ 00:05:41.161 11:24:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.161 11:24:18 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.161 11:24:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:41.161 11:24:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.161 11:24:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.161 ************************************ 00:05:41.161 START TEST accel_compress_verify 00:05:41.161 ************************************ 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.161 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.161 11:24:18 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:41.161 11:24:18 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:41.161 [2024-07-15 11:24:18.438071] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:41.161 [2024-07-15 11:24:18.438168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63736 ] 00:05:41.161 [2024-07-15 11:24:18.571881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.161 [2024-07-15 11:24:18.631048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.452 [2024-07-15 11:24:18.662153] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.453 [2024-07-15 11:24:18.701632] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:41.453 00:05:41.453 Compression does not support the verify option, aborting. 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.453 00:05:41.453 real 0m0.364s 00:05:41.453 user 0m0.229s 00:05:41.453 sys 0m0.073s 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.453 11:24:18 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:41.453 ************************************ 00:05:41.453 END TEST accel_compress_verify 00:05:41.453 ************************************ 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.453 11:24:18 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.453 ************************************ 00:05:41.453 START TEST accel_wrong_workload 00:05:41.453 ************************************ 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:41.453 11:24:18 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:41.453 Unsupported workload type: foobar 00:05:41.453 [2024-07-15 11:24:18.841727] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:41.453 accel_perf options: 00:05:41.453 [-h help message] 00:05:41.453 [-q queue depth per core] 00:05:41.453 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.453 [-T number of threads per core 00:05:41.453 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.453 [-t time in seconds] 00:05:41.453 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.453 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:41.453 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.453 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.453 [-S for crc32c workload, use this seed value (default 0) 00:05:41.453 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.453 [-f for fill workload, use this BYTE value (default 255) 00:05:41.453 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.453 [-y verify result if this switch is on] 00:05:41.453 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.453 Can be used to spread operations across a wider range of memory. 
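The foobar run fails during argument parsing because -w only accepts the workload names listed in the usage text above (copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor and the dif variants). A sketch of a supported invocation, the same crc32c case the suite runs a little further down:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y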
00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.453 00:05:41.453 real 0m0.028s 00:05:41.453 user 0m0.019s 00:05:41.453 sys 0m0.009s 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.453 11:24:18 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:41.453 ************************************ 00:05:41.453 END TEST accel_wrong_workload 00:05:41.453 ************************************ 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.453 11:24:18 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.453 11:24:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.453 ************************************ 00:05:41.453 START TEST accel_negative_buffers 00:05:41.453 ************************************ 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:41.453 11:24:18 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:41.453 -x option must be non-negative. 
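Same pattern for the negative-buffers case: -x sets the number of xor source buffers and, per the usage text, the minimum is 2, so -x -1 is rejected before the app starts. A sketch of a valid xor run under that constraint:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2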
00:05:41.453 [2024-07-15 11:24:18.912984] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.453 accel_perf options: 00:05:41.453 [-h help message] 00:05:41.453 [-q queue depth per core] 00:05:41.453 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.453 [-T number of threads per core 00:05:41.453 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.453 [-t time in seconds] 00:05:41.453 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.453 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:41.453 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.453 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.453 [-S for crc32c workload, use this seed value (default 0) 00:05:41.453 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.453 [-f for fill workload, use this BYTE value (default 255) 00:05:41.453 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.453 [-y verify result if this switch is on] 00:05:41.453 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.453 Can be used to spread operations across a wider range of memory. 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.453 00:05:41.453 real 0m0.030s 00:05:41.453 user 0m0.015s 00:05:41.453 sys 0m0.014s 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.453 11:24:18 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:41.453 ************************************ 00:05:41.453 END TEST accel_negative_buffers 00:05:41.453 ************************************ 00:05:41.712 11:24:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.712 11:24:18 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.712 11:24:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:41.712 11:24:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.712 11:24:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.712 ************************************ 00:05:41.712 START TEST accel_crc32c 00:05:41.712 ************************************ 00:05:41.712 11:24:18 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:41.712 11:24:18 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:41.712 [2024-07-15 11:24:18.977112] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:41.712 [2024-07-15 11:24:18.977220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:05:41.712 [2024-07-15 11:24:19.111463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.712 [2024-07-15 11:24:19.179960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.969 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.970 11:24:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:42.904 11:24:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.904 00:05:42.904 real 0m1.390s 00:05:42.904 user 0m1.219s 00:05:42.904 sys 0m0.075s 00:05:42.904 11:24:20 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.904 11:24:20 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.904 ************************************ 00:05:42.904 END TEST accel_crc32c 00:05:42.904 ************************************ 00:05:43.162 11:24:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.162 11:24:20 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:43.162 11:24:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:43.162 11:24:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.162 11:24:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.162 ************************************ 00:05:43.162 START TEST accel_crc32c_C2 00:05:43.162 ************************************ 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:43.162 11:24:20 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.162 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:43.162 [2024-07-15 11:24:20.409184] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:43.162 [2024-07-15 11:24:20.409277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63829 ] 00:05:43.162 [2024-07-15 11:24:20.544131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.420 [2024-07-15 11:24:20.649131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.420 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.421 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.421 11:24:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.355 11:24:21 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.355 00:05:44.355 real 0m1.424s 00:05:44.355 user 0m1.244s 00:05:44.355 sys 0m0.082s 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.355 ************************************ 00:05:44.355 END TEST accel_crc32c_C2 00:05:44.355 ************************************ 00:05:44.355 11:24:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:44.613 11:24:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.613 11:24:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:44.613 11:24:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.613 11:24:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.613 11:24:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.613 ************************************ 00:05:44.613 START TEST accel_copy 00:05:44.613 ************************************ 00:05:44.613 11:24:21 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:44.613 11:24:21 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:44.613 11:24:21 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:44.613 [2024-07-15 11:24:21.875868] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:44.613 [2024-07-15 11:24:21.875988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63863 ] 00:05:44.613 [2024-07-15 11:24:22.017856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.871 [2024-07-15 11:24:22.120013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.871 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:44.872 11:24:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:45.808 11:24:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.808 00:05:45.808 real 0m1.425s 00:05:45.808 user 0m1.243s 00:05:45.808 sys 0m0.082s 00:05:45.808 11:24:23 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.808 ************************************ 00:05:45.808 END TEST accel_copy 00:05:45.808 ************************************ 00:05:45.808 11:24:23 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:46.065 11:24:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.065 11:24:23 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:46.065 11:24:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:46.065 11:24:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.065 11:24:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.065 ************************************ 00:05:46.065 START TEST accel_fill 00:05:46.065 ************************************ 00:05:46.065 11:24:23 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.065 11:24:23 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:46.065 11:24:23 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:46.065 [2024-07-15 11:24:23.345126] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:46.065 [2024-07-15 11:24:23.345261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63900 ] 00:05:46.065 [2024-07-15 11:24:23.482008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.322 [2024-07-15 11:24:23.541829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.322 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.323 11:24:23 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:46.323 11:24:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:47.257 11:24:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.257 00:05:47.257 real 0m1.388s 00:05:47.257 user 0m1.208s 00:05:47.257 sys 0m0.085s 00:05:47.257 11:24:24 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.257 ************************************ 00:05:47.257 END TEST accel_fill 00:05:47.257 ************************************ 00:05:47.257 11:24:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:47.516 11:24:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.516 11:24:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:47.516 11:24:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:47.516 11:24:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.516 11:24:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.516 ************************************ 00:05:47.516 START TEST accel_copy_crc32c 00:05:47.516 ************************************ 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:47.516 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:47.516 [2024-07-15 11:24:24.772207] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:47.516 [2024-07-15 11:24:24.772307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63930 ] 00:05:47.516 [2024-07-15 11:24:24.907097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.516 [2024-07-15 11:24:24.966756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:47.775 11:24:25 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.707 ************************************ 00:05:48.707 END TEST accel_copy_crc32c 00:05:48.707 ************************************ 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.707 00:05:48.707 real 0m1.375s 00:05:48.707 user 0m1.194s 00:05:48.707 sys 0m0.081s 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.707 11:24:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:48.707 11:24:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.707 11:24:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.707 11:24:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:48.707 11:24:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.707 11:24:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.707 ************************************ 00:05:48.707 START TEST accel_copy_crc32c_C2 00:05:48.707 ************************************ 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.707 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:48.965 [2024-07-15 11:24:26.189755] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:48.965 [2024-07-15 11:24:26.189867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63964 ] 00:05:48.965 [2024-07-15 11:24:26.332870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.965 [2024-07-15 11:24:26.394318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:48.965 11:24:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.333 00:05:50.333 real 0m1.382s 00:05:50.333 user 0m1.209s 00:05:50.333 sys 0m0.076s 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:05:50.333 ************************************ 00:05:50.333 END TEST accel_copy_crc32c_C2 00:05:50.333 ************************************ 00:05:50.333 11:24:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:50.333 11:24:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.333 11:24:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:50.333 11:24:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.333 11:24:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.334 11:24:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.334 ************************************ 00:05:50.334 START TEST accel_dualcast 00:05:50.334 ************************************ 00:05:50.334 11:24:27 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:50.334 11:24:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:50.334 [2024-07-15 11:24:27.612764] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:50.334 [2024-07-15 11:24:27.612892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63999 ] 00:05:50.334 [2024-07-15 11:24:27.755485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.591 [2024-07-15 11:24:27.815102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:50.591 11:24:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.520 11:24:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:51.521 11:24:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.521 00:05:51.521 real 0m1.380s 00:05:51.521 user 0m1.206s 00:05:51.521 sys 0m0.075s 00:05:51.521 11:24:28 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.521 11:24:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:51.521 ************************************ 00:05:51.521 END TEST accel_dualcast 00:05:51.521 ************************************ 00:05:51.852 11:24:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.852 11:24:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:51.852 11:24:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:51.852 11:24:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.852 11:24:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.852 ************************************ 00:05:51.852 START TEST accel_compare 00:05:51.852 ************************************ 00:05:51.852 11:24:29 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:51.852 11:24:29 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:51.852 [2024-07-15 11:24:29.037892] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
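The accel_perf command line traced just above shows how each of these accel cases is driven: the harness pipes the JSON accel configuration in over /dev/fd/62 and runs a single workload for one second with result verification. A minimal sketch of the same invocation outside the harness, reusing only the flags visible in the trace (accel.json is a placeholder for the piped-in config, not a path from this log):

    # -t 1: run for 1 second, -w compare: workload to exercise, -y: verify the results;
    # accel.json stands in for the JSON config the harness feeds over /dev/fd/62
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c accel.json -t 1 -w compare -y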
00:05:51.852 [2024-07-15 11:24:29.038029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64028 ] 00:05:51.852 [2024-07-15 11:24:29.174336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.852 [2024-07-15 11:24:29.264940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.124 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:52.125 11:24:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.085 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:53.086 ************************************ 00:05:53.086 END TEST accel_compare 00:05:53.086 ************************************ 00:05:53.086 11:24:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.086 00:05:53.086 real 0m1.420s 00:05:53.086 user 0m1.242s 00:05:53.086 sys 0m0.081s 00:05:53.086 11:24:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.086 11:24:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:53.086 11:24:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.086 11:24:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:53.086 11:24:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.086 11:24:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.086 11:24:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.086 ************************************ 00:05:53.086 START TEST accel_xor 00:05:53.086 ************************************ 00:05:53.086 11:24:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:53.086 11:24:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:53.086 [2024-07-15 11:24:30.499974] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
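The START TEST / END TEST banners and the real/user/sys lines above come from the run_test wrapper that times each case (run_test accel_compare accel_test -t 1 -w compare -y in the trace). A rough sketch of that shape, assuming nothing beyond what the banners and timings imply (this is not SPDK's actual implementation):

    # wrap a test command with banners and wall-clock/user/sys timing, as the trace suggests
    run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
    }
    run_test accel_compare accel_test -t 1 -w compare -y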
00:05:53.086 [2024-07-15 11:24:30.500085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64068 ] 00:05:53.345 [2024-07-15 11:24:30.637135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.345 [2024-07-15 11:24:30.696030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.345 11:24:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 11:24:31 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 ************************************ 00:05:54.723 END TEST accel_xor 00:05:54.723 ************************************ 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.723 00:05:54.723 real 0m1.371s 00:05:54.723 user 0m1.197s 00:05:54.723 sys 0m0.079s 00:05:54.723 11:24:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.723 11:24:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:54.723 11:24:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.723 11:24:31 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:54.723 11:24:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:54.723 11:24:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.723 11:24:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.723 ************************************ 00:05:54.723 START TEST accel_xor 00:05:54.723 ************************************ 00:05:54.723 11:24:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:54.723 11:24:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:54.723 [2024-07-15 11:24:31.920428] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
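This second xor pass is launched with -x 3, where the first pass used the default of two sources (val=2 in the earlier trace); the option sets how many source buffers the xor workload combines, and the trace below records val=3 accordingly. Byte for byte that is dst[i] = src0[i] ^ src1[i] ^ src2[i]; a one-line shell-arithmetic illustration (purely illustrative, not part of the test):

    printf '%x\n' $(( 0xaa ^ 0x55 ^ 0x0f ))   # three-source xor of example bytes -> f0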
00:05:54.723 [2024-07-15 11:24:31.920539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64097 ] 00:05:54.723 [2024-07-15 11:24:32.059456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.723 [2024-07-15 11:24:32.134501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.723 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.723 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.723 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.723 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.724 11:24:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.100 11:24:33 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 ************************************ 00:05:56.100 END TEST accel_xor 00:05:56.100 ************************************ 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:56.100 11:24:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.100 00:05:56.100 real 0m1.400s 00:05:56.100 user 0m1.226s 00:05:56.100 sys 0m0.077s 00:05:56.100 11:24:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.100 11:24:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:56.100 11:24:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.100 11:24:33 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:56.100 11:24:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:56.100 11:24:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.100 11:24:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.100 ************************************ 00:05:56.100 START TEST accel_dif_verify 00:05:56.100 ************************************ 00:05:56.100 11:24:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:56.100 11:24:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:56.100 [2024-07-15 11:24:33.369175] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
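The dif_verify trace below carries three buffer sizes: a 4096-byte data buffer, a 512-byte value and an 8-byte value. Read as the usual T10 DIF layout (an interpretation of the traced values, not something the log states), that is eight 512-byte blocks per buffer, each carrying an 8-byte protection-information field:

    echo $(( 4096 / 512 ))       # 8 blocks per 4096-byte buffer (assuming 512 is the block size)
    echo $(( 8 * 4096 / 512 ))   # 64 bytes of protection information per buffer (assuming 8 bytes per block)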
00:05:56.100 [2024-07-15 11:24:33.369258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64136 ] 00:05:56.100 [2024-07-15 11:24:33.504609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.100 [2024-07-15 11:24:33.567549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:56.359 11:24:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.291 11:24:34 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 ************************************ 00:05:57.291 END TEST accel_dif_verify 00:05:57.291 ************************************ 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:57.291 11:24:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.291 00:05:57.291 real 0m1.380s 00:05:57.291 user 0m1.209s 00:05:57.291 sys 0m0.073s 00:05:57.291 11:24:34 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.291 11:24:34 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:57.291 11:24:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.291 11:24:34 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:57.291 11:24:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:57.291 11:24:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.291 11:24:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.554 ************************************ 00:05:57.554 START TEST accel_dif_generate 00:05:57.554 ************************************ 00:05:57.554 11:24:34 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.554 11:24:34 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:57.554 11:24:34 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:57.554 [2024-07-15 11:24:34.799031] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:05:57.554 [2024-07-15 11:24:34.799121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64168 ] 00:05:57.554 [2024-07-15 11:24:34.935481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.554 [2024-07-15 11:24:35.011512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.838 11:24:35 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:57.838 11:24:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:58.772 11:24:36 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.772 00:05:58.772 real 0m1.397s 
00:05:58.772 user 0m1.218s 00:05:58.772 sys 0m0.082s 00:05:58.772 11:24:36 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.772 11:24:36 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:58.772 ************************************ 00:05:58.772 END TEST accel_dif_generate 00:05:58.772 ************************************ 00:05:58.772 11:24:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.772 11:24:36 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:58.772 11:24:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:58.772 11:24:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.772 11:24:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.772 ************************************ 00:05:58.772 START TEST accel_dif_generate_copy 00:05:58.772 ************************************ 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:58.772 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:58.772 [2024-07-15 11:24:36.245980] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:05:58.772 [2024-07-15 11:24:36.246075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64197 ] 00:05:59.030 [2024-07-15 11:24:36.381625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.030 [2024-07-15 11:24:36.445308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.030 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:59.031 11:24:36 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
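The accel_dif_generate_copy case above follows the same pattern as the dif_generate run before it: accel.sh reads the workload parameters back in its var/val loop and, per the trace, launches the accel_perf example binary with a JSON accel config handed over on /dev/fd/62 by the calling script. A rough way to repeat just this case outside the harness, assuming the SPDK checkout path shown in the trace and skipping the generated config, would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

Here -t 1 matches the '1 seconds' run time read above and -w names the workload; the 'software' module recorded at the end of each run suggests these ops ran on the CPU fallback rather than a hardware engine.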
00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 ************************************ 00:06:00.451 END TEST accel_dif_generate_copy 00:06:00.451 ************************************ 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.451 00:06:00.451 real 0m1.377s 00:06:00.451 user 0m1.203s 00:06:00.451 sys 0m0.079s 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.451 11:24:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:00.451 11:24:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.451 11:24:37 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:00.451 11:24:37 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.451 11:24:37 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:00.451 11:24:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.451 11:24:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.451 ************************************ 00:06:00.451 START TEST accel_comp 00:06:00.451 ************************************ 00:06:00.451 11:24:37 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:00.451 11:24:37 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:00.451 11:24:37 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:00.451 [2024-07-15 11:24:37.675950] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:00.451 [2024-07-15 11:24:37.676680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64237 ] 00:06:00.451 [2024-07-15 11:24:37.816165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.451 [2024-07-15 11:24:37.887432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:00.709 11:24:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.643 ************************************ 00:06:01.643 END TEST accel_comp 00:06:01.643 ************************************ 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:01.643 11:24:39 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.643 00:06:01.643 real 0m1.399s 00:06:01.643 user 0m1.229s 00:06:01.643 sys 0m0.077s 00:06:01.643 11:24:39 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.643 11:24:39 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:01.643 11:24:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.643 11:24:39 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.643 11:24:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:01.643 11:24:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.643 11:24:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.643 ************************************ 00:06:01.643 START TEST accel_decomp 00:06:01.643 ************************************ 00:06:01.643 11:24:39 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:01.643 11:24:39 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:01.901 [2024-07-15 11:24:39.129020] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:01.901 [2024-07-15 11:24:39.129118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64266 ] 00:06:01.901 [2024-07-15 11:24:39.267423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.901 [2024-07-15 11:24:39.340215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
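The compress and decompress cases differ from the DIF ones mainly in their inputs: both point accel_perf at the bib test file under test/accel/, and the decompress variants add the -y flag (note the Yes value read just above, where the compress run read No). The invocation recorded in the trace is, roughly:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y

with -c /dev/fd/62 being the JSON config the wrapper script feeds in over a file descriptor, so a by-hand rerun would presumably drop that flag or point -c at a real config file.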
00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:02.160 11:24:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.095 ************************************ 00:06:03.095 END TEST accel_decomp 00:06:03.095 ************************************ 00:06:03.095 11:24:40 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.095 00:06:03.095 real 0m1.394s 00:06:03.095 user 0m1.226s 00:06:03.095 sys 0m0.070s 00:06:03.095 11:24:40 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.095 11:24:40 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:03.095 11:24:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.095 11:24:40 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:03.095 11:24:40 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:03.095 11:24:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.095 11:24:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.095 ************************************ 00:06:03.095 START TEST accel_decomp_full 00:06:03.095 ************************************ 00:06:03.095 11:24:40 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:03.095 11:24:40 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:03.096 11:24:40 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:03.398 [2024-07-15 11:24:40.574401] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
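accel_decomp_full repeats the decompress case with -o 0 appended. Judging from the trace that follows, where the data size read back is '111250 bytes' instead of the 4096-byte buffers used elsewhere, -o 0 appears to make accel_perf use the full size of the input file rather than a fixed transfer size. The equivalent command line, as recorded above, is roughly:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0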
00:06:03.398 [2024-07-15 11:24:40.575112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64301 ] 00:06:03.398 [2024-07-15 11:24:40.714488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.398 [2024-07-15 11:24:40.784538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:03.398 11:24:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:03.399 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:03.399 11:24:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.778 11:24:41 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.778 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.779 ************************************ 00:06:04.779 END TEST accel_decomp_full 00:06:04.779 ************************************ 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.779 11:24:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.779 00:06:04.779 real 0m1.408s 00:06:04.779 user 0m1.234s 00:06:04.779 sys 0m0.081s 00:06:04.779 11:24:41 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.779 11:24:41 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 11:24:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.779 11:24:41 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.779 11:24:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:04.779 11:24:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.779 11:24:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 ************************************ 00:06:04.779 START TEST accel_decomp_mcore 00:06:04.779 ************************************ 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:04.779 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:04.779 [2024-07-15 11:24:42.027197] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:04.779 [2024-07-15 11:24:42.027279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64335 ] 00:06:04.779 [2024-07-15 11:24:42.165761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.779 [2024-07-15 11:24:42.242051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.779 [2024-07-15 11:24:42.242208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.779 [2024-07-15 11:24:42.242312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.779 [2024-07-15 11:24:42.242465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.037 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.037 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.037 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.037 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.038 11:24:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.974 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.975 11:24:43 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.975 ************************************ 00:06:05.975 END TEST accel_decomp_mcore 00:06:05.975 ************************************ 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.975 00:06:05.975 real 0m1.408s 00:06:05.975 user 0m4.435s 00:06:05.975 sys 0m0.093s 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.975 11:24:43 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.234 11:24:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.234 11:24:43 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.234 11:24:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:06.234 11:24:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.234 11:24:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.234 ************************************ 00:06:06.234 START TEST accel_decomp_full_mcore 00:06:06.234 ************************************ 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.234 11:24:43 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:06.234 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:06.234 [2024-07-15 11:24:43.487786] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:06.234 [2024-07-15 11:24:43.487900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64367 ] 00:06:06.234 [2024-07-15 11:24:43.628452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.234 [2024-07-15 11:24:43.702578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.234 [2024-07-15 11:24:43.702704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.234 [2024-07-15 11:24:43.702881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.234 [2024-07-15 11:24:43.702886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.493 11:24:43 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.493 11:24:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.429 ************************************ 00:06:07.429 END TEST accel_decomp_full_mcore 00:06:07.429 ************************************ 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.429 00:06:07.429 real 0m1.425s 00:06:07.429 user 0m4.493s 00:06:07.429 sys 0m0.096s 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.429 11:24:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:07.688 11:24:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.688 11:24:44 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.688 11:24:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:07.688 11:24:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.688 11:24:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.688 ************************************ 00:06:07.688 START TEST accel_decomp_mthread 00:06:07.688 ************************************ 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:07.688 11:24:44 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:07.688 [2024-07-15 11:24:44.953205] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:07.688 [2024-07-15 11:24:44.953287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64410 ] 00:06:07.688 [2024-07-15 11:24:45.083988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.947 [2024-07-15 11:24:45.166413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.947 11:24:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.882 00:06:08.882 real 0m1.383s 00:06:08.882 user 0m1.213s 00:06:08.882 sys 0m0.075s 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.882 11:24:46 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:08.882 ************************************ 00:06:08.882 END TEST accel_decomp_mthread 00:06:08.882 ************************************ 00:06:08.882 11:24:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.882 11:24:46 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.140 11:24:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:09.140 11:24:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.140 11:24:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.140 ************************************ 00:06:09.140 START 
TEST accel_decomp_full_mthread 00:06:09.140 ************************************ 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.140 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.141 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.141 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:09.141 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:09.141 [2024-07-15 11:24:46.393468] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:06:09.141 [2024-07-15 11:24:46.393588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64440 ] 00:06:09.141 [2024-07-15 11:24:46.521807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.141 [2024-07-15 11:24:46.583187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.399 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.400 11:24:46 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.400 11:24:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.359 00:06:10.359 real 0m1.414s 00:06:10.359 user 0m1.246s 00:06:10.359 sys 0m0.068s 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.359 11:24:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:10.359 ************************************ 00:06:10.359 END TEST accel_decomp_full_mthread 00:06:10.359 ************************************ 
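A note on the four accel_decomp_* cases traced above: all of them wrap the same accel_perf example binary and differ only in a few flags. A minimal sketch of the invocation as it appears in this log follows; flag semantics beyond what the trace itself shows are observations from this run, not taken from accel_perf documentation:

# base form: software decompress of test/accel/bib for 1 second, JSON accel config on fd 62
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
  -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
# accel_decomp_mcore      adds -m 0xf  (trace shows 4 cores available, reactors on cores 0-3)
# accel_decomp_mthread    adds -T 2    (trace shows val=2 worker threads)
# accel_decomp_full_*     add  -o 0    (trace shows '111250 bytes' buffers instead of '4096 bytes')

Each wrapper then checks that an accel module and opcode were recorded ([[ -n software ]], [[ -n decompress ]]) before reporting PASS.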
00:06:10.359 11:24:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.359 11:24:47 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:10.359 11:24:47 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.359 11:24:47 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:10.359 11:24:47 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:10.359 11:24:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.359 11:24:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.359 11:24:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.359 11:24:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.359 11:24:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.359 11:24:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.359 11:24:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.359 11:24:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:10.359 11:24:47 accel -- accel/accel.sh@41 -- # jq -r . 00:06:10.617 ************************************ 00:06:10.617 START TEST accel_dif_functional_tests 00:06:10.617 ************************************ 00:06:10.617 11:24:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.617 [2024-07-15 11:24:47.886843] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:10.617 [2024-07-15 11:24:47.886945] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64476 ] 00:06:10.617 [2024-07-15 11:24:48.020657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.617 [2024-07-15 11:24:48.083484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.617 [2024-07-15 11:24:48.083623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.617 [2024-07-15 11:24:48.083629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.875 00:06:10.875 00:06:10.875 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.875 http://cunit.sourceforge.net/ 00:06:10.875 00:06:10.875 00:06:10.875 Suite: accel_dif 00:06:10.875 Test: verify: DIF generated, GUARD check ...passed 00:06:10.875 Test: verify: DIF generated, APPTAG check ...passed 00:06:10.875 Test: verify: DIF generated, REFTAG check ...passed 00:06:10.875 Test: verify: DIF not generated, GUARD check ...passed 00:06:10.875 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:24:48.136694] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.875 [2024-07-15 11:24:48.136808] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.875 passed 00:06:10.875 Test: verify: DIF not generated, REFTAG check ...passed 00:06:10.875 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:10.875 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 11:24:48.136860] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.875 passed 00:06:10.875 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:10.875 Test: verify: REFTAG incorrect, REFTAG ignore ...passed[2024-07-15 11:24:48.137037] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:10.875 00:06:10.875 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:10.875 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:10.875 Test: verify copy: DIF generated, GUARD check ...[2024-07-15 11:24:48.137402] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:10.875 passed 00:06:10.875 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:10.875 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:10.875 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:24:48.137911] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:10.875 passed 00:06:10.875 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:24:48.138110] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:10.875 passed 00:06:10.875 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:10.875 Test: generate copy: DIF generated, GUARD check ...[2024-07-15 11:24:48.138330] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:10.875 passed 00:06:10.875 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:10.875 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:10.875 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:10.875 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:10.875 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:10.876 Test: generate copy: iovecs-len validate ...passed 00:06:10.876 Test: generate copy: buffer alignment validate ...[2024-07-15 11:24:48.138861] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:10.876 passed 00:06:10.876 00:06:10.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.876 suites 1 1 n/a 0 0 00:06:10.876 tests 26 26 26 0 0 00:06:10.876 asserts 115 115 115 0 n/a 00:06:10.876 00:06:10.876 Elapsed time = 0.007 seconds 00:06:10.876 ************************************ 00:06:10.876 END TEST accel_dif_functional_tests 00:06:10.876 ************************************ 00:06:10.876 00:06:10.876 real 0m0.458s 00:06:10.876 user 0m0.530s 00:06:10.876 sys 0m0.103s 00:06:10.876 11:24:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.876 11:24:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:10.876 11:24:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.876 00:06:10.876 real 0m31.917s 00:06:10.876 user 0m34.161s 00:06:10.876 sys 0m2.900s 00:06:10.876 ************************************ 00:06:10.876 END TEST accel 00:06:10.876 ************************************ 00:06:10.876 11:24:48 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.876 11:24:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.134 11:24:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.134 11:24:48 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:11.134 11:24:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.134 11:24:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.134 11:24:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.134 ************************************ 00:06:11.134 START TEST accel_rpc 00:06:11.134 ************************************ 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:11.134 * Looking for test storage... 00:06:11.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:11.134 11:24:48 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.134 11:24:48 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64540 00:06:11.134 11:24:48 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:11.134 11:24:48 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64540 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64540 ']' 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.134 11:24:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.134 [2024-07-15 11:24:48.558194] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
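The accel_rpc suite starting here follows the usual SPDK RPC-test pattern: it boots a bare spdk_tgt with --wait-for-rpc and drives it through rpc_cmd, which is a thin wrapper around scripts/rpc.py. A condensed sketch of the accel_assign_opcode flow, using only the calls visible in the trace below (this is a reading of the log, not the authoritative accel_rpc.sh):

# before framework_start_init, opcode assignment is still allowed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # assign copy to a bogus module
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software    # reassign copy to the software module
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
# after init, verify the assignment actually took effect
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software

The suite then kills the spdk_tgt (pid 64540 in this run) and reports PASS.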
00:06:11.134 [2024-07-15 11:24:48.558322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64540 ] 00:06:11.391 [2024-07-15 11:24:48.703153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.391 [2024-07-15 11:24:48.762992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.324 11:24:49 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.324 11:24:49 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.324 11:24:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:12.324 11:24:49 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:12.324 11:24:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:12.324 11:24:49 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:12.324 11:24:49 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:12.324 11:24:49 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.324 11:24:49 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.324 11:24:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.324 ************************************ 00:06:12.324 START TEST accel_assign_opcode 00:06:12.324 ************************************ 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.324 [2024-07-15 11:24:49.591511] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.324 [2024-07-15 11:24:49.599500] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.324 software 00:06:12.324 00:06:12.324 real 0m0.213s 00:06:12.324 user 0m0.062s 00:06:12.324 sys 0m0.005s 00:06:12.324 ************************************ 00:06:12.324 END TEST accel_assign_opcode 00:06:12.324 ************************************ 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.324 11:24:49 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:12.582 11:24:49 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64540 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64540 ']' 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64540 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64540 00:06:12.582 killing process with pid 64540 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64540' 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@967 -- # kill 64540 00:06:12.582 11:24:49 accel_rpc -- common/autotest_common.sh@972 -- # wait 64540 00:06:12.840 00:06:12.840 real 0m1.727s 00:06:12.840 user 0m1.991s 00:06:12.840 sys 0m0.346s 00:06:12.840 11:24:50 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.840 ************************************ 00:06:12.840 END TEST accel_rpc 00:06:12.840 ************************************ 00:06:12.840 11:24:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.840 11:24:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.840 11:24:50 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.841 11:24:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.841 11:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.841 11:24:50 -- common/autotest_common.sh@10 -- # set +x 00:06:12.841 ************************************ 00:06:12.841 START TEST app_cmdline 00:06:12.841 ************************************ 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.841 * Looking for test storage... 00:06:12.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.841 11:24:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.841 11:24:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64651 00:06:12.841 11:24:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.841 11:24:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64651 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64651 ']' 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.841 11:24:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.102 [2024-07-15 11:24:50.320428] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:13.102 [2024-07-15 11:24:50.320834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64651 ] 00:06:13.102 [2024-07-15 11:24:50.467057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.102 [2024-07-15 11:24:50.525862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.035 11:24:51 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.035 11:24:51 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:14.035 11:24:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:14.293 { 00:06:14.293 "fields": { 00:06:14.293 "commit": "e7cce062d", 00:06:14.293 "major": 24, 00:06:14.293 "minor": 9, 00:06:14.293 "patch": 0, 00:06:14.293 "suffix": "-pre" 00:06:14.293 }, 00:06:14.293 "version": "SPDK v24.09-pre git sha1 e7cce062d" 00:06:14.293 } 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.293 11:24:51 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.293 11:24:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:14.293 11:24:51 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.293 11:24:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.293 11:24:51 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 
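The app_cmdline check exercised here is about RPC allow-listing: this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods should succeed and anything else should be rejected before dispatch. A hedged sketch of both sides of the check, matching the calls in this trace:

# allowed: returns the JSON shown above (commit e7cce062d, SPDK v24.09-pre)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
# not on the allow-list: expected to fail with JSON-RPC error -32601 (Method not found)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

The NOT wrapper around the second call means the test passes precisely because that request errors out.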
00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:14.550 11:24:51 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.550 2024/07/15 11:24:52 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:14.550 request: 00:06:14.550 { 00:06:14.550 "method": "env_dpdk_get_mem_stats", 00:06:14.550 "params": {} 00:06:14.550 } 00:06:14.550 Got JSON-RPC error response 00:06:14.550 GoRPCClient: error on JSON-RPC call 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.807 11:24:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64651 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64651 ']' 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64651 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64651 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.807 killing process with pid 64651 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64651' 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@967 -- # kill 64651 00:06:14.807 11:24:52 app_cmdline -- common/autotest_common.sh@972 -- # wait 64651 00:06:15.066 00:06:15.066 real 0m2.161s 00:06:15.066 user 0m2.970s 00:06:15.066 sys 0m0.382s 00:06:15.066 11:24:52 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.066 11:24:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 END TEST app_cmdline 00:06:15.066 ************************************ 00:06:15.066 11:24:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.066 11:24:52 -- spdk/autotest.sh@186 -- # run_test version 
/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:15.066 11:24:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.066 11:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.066 11:24:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 START TEST version 00:06:15.066 ************************************ 00:06:15.066 11:24:52 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:15.066 * Looking for test storage... 00:06:15.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:15.066 11:24:52 version -- app/version.sh@17 -- # get_header_version major 00:06:15.066 11:24:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # cut -f2 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.066 11:24:52 version -- app/version.sh@17 -- # major=24 00:06:15.066 11:24:52 version -- app/version.sh@18 -- # get_header_version minor 00:06:15.066 11:24:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # cut -f2 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.066 11:24:52 version -- app/version.sh@18 -- # minor=9 00:06:15.066 11:24:52 version -- app/version.sh@19 -- # get_header_version patch 00:06:15.066 11:24:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # cut -f2 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.066 11:24:52 version -- app/version.sh@19 -- # patch=0 00:06:15.066 11:24:52 version -- app/version.sh@20 -- # get_header_version suffix 00:06:15.066 11:24:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # cut -f2 00:06:15.066 11:24:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.066 11:24:52 version -- app/version.sh@20 -- # suffix=-pre 00:06:15.066 11:24:52 version -- app/version.sh@22 -- # version=24.9 00:06:15.066 11:24:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.066 11:24:52 version -- app/version.sh@28 -- # version=24.9rc0 00:06:15.066 11:24:52 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:15.066 11:24:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.066 11:24:52 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:15.066 11:24:52 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:15.066 00:06:15.066 real 0m0.147s 00:06:15.066 user 0m0.084s 00:06:15.066 sys 0m0.090s 00:06:15.066 11:24:52 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.066 11:24:52 version -- common/autotest_common.sh@10 -- # set +x 00:06:15.066 ************************************ 00:06:15.066 END TEST version 00:06:15.066 ************************************ 00:06:15.325 11:24:52 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:15.325 11:24:52 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@198 -- # uname -s 00:06:15.325 11:24:52 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:15.325 11:24:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:15.325 11:24:52 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:15.325 11:24:52 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:15.325 11:24:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.325 11:24:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.325 11:24:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:15.325 11:24:52 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:15.325 11:24:52 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.325 11:24:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:15.325 11:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.325 11:24:52 -- common/autotest_common.sh@10 -- # set +x 00:06:15.325 ************************************ 00:06:15.325 START TEST nvmf_tcp 00:06:15.325 ************************************ 00:06:15.325 11:24:52 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.325 * Looking for test storage... 00:06:15.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.325 11:24:52 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.325 11:24:52 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.325 11:24:52 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.325 11:24:52 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.325 11:24:52 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.325 11:24:52 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.325 11:24:52 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:15.325 11:24:52 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:15.325 11:24:52 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.325 11:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:15.325 11:24:52 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.325 11:24:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:15.325 11:24:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.325 11:24:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.325 ************************************ 00:06:15.325 START TEST nvmf_example 00:06:15.325 ************************************ 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.325 * Looking for test storage... 
00:06:15.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:15.325 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
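[editor's note] A few lines above, nvmf/common.sh builds the host identity that later nvme connect calls reuse; a minimal sketch of that step (the parameter expansion used to strip the NQN prefix is an illustrative assumption, not a copy of common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the UUID portion
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")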
00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.326 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.585 11:24:52 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:15.585 Cannot find device "nvmf_init_br" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:15.585 Cannot find device "nvmf_tgt_br" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:15.585 Cannot find device "nvmf_tgt_br2" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:15.585 Cannot find device "nvmf_init_br" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:15.585 Cannot find device "nvmf_tgt_br" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:15.585 Cannot find device 
"nvmf_tgt_br2" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:15.585 Cannot find device "nvmf_br" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:15.585 Cannot find device "nvmf_init_if" 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:15.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:15.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:15.585 11:24:52 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:15.585 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:15.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:06:15.844 00:06:15.844 --- 10.0.0.2 ping statistics --- 00:06:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.844 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:15.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:15.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:06:15.844 00:06:15.844 --- 10.0.0.3 ping statistics --- 00:06:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.844 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:15.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:06:15.844 00:06:15.844 --- 10.0.0.1 ping statistics --- 00:06:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.844 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=65006 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 65006 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65006 ']' 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 
-- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:15.844 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.845 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.845 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.845 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.845 11:24:53 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:17.218 11:24:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:27.187 Initializing NVMe Controllers 00:06:27.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:27.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:27.187 Initialization complete. Launching workers. 00:06:27.187 ======================================================== 00:06:27.187 Latency(us) 00:06:27.187 Device Information : IOPS MiB/s Average min max 00:06:27.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14702.40 57.43 4352.79 739.56 22424.51 00:06:27.187 ======================================================== 00:06:27.187 Total : 14702.40 57.43 4352.79 739.56 22424.51 00:06:27.187 00:06:27.187 11:25:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:27.187 11:25:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:27.187 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:27.187 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:27.445 rmmod nvme_tcp 00:06:27.445 rmmod nvme_fabrics 00:06:27.445 rmmod nvme_keyring 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 65006 ']' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 65006 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65006 ']' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65006 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65006 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:27.445 killing process with pid 65006 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65006' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65006 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65006 00:06:27.445 nvmf threads initialize successfully 00:06:27.445 bdev subsystem init successfully 
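[editor's note] Put together, the RPC sequence traced above (nvmf_example.sh@45-61) brings up a TCP target backed by a malloc bdev and then drives it with spdk_nvme_perf; a compact sketch using the same parameters as the trace, with the ip netns exec and waitforlisten plumbing omitted:

    # Target-side configuration over rpc.py
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                      # 64 MiB / 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator-side load, matching the perf invocation in the trace
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'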
00:06:27.445 created a nvmf target service 00:06:27.445 create targets's poll groups done 00:06:27.445 all subsystems of target started 00:06:27.445 nvmf target is running 00:06:27.445 all subsystems of target stopped 00:06:27.445 destroy targets's poll groups done 00:06:27.445 destroyed the nvmf target service 00:06:27.445 bdev subsystem finish successfully 00:06:27.445 nvmf threads destroy successfully 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.445 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.706 00:06:27.706 real 0m12.249s 00:06:27.706 user 0m44.208s 00:06:27.706 sys 0m1.906s 00:06:27.706 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.706 11:25:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:27.706 ************************************ 00:06:27.706 END TEST nvmf_example 00:06:27.706 ************************************ 00:06:27.706 11:25:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:27.706 11:25:04 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.706 11:25:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.706 11:25:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.706 11:25:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.706 ************************************ 00:06:27.706 START TEST nvmf_filesystem 00:06:27.706 ************************************ 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.706 * Looking for test storage... 
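[editor's note] Each of these suites is launched through the run_test wrapper from autotest_common.sh, which is what produces the START TEST / END TEST banners and the real/user/sys timing seen above; a rough sketch of what such a wrapper does (simplified; the real helper also manages the xtrace prefixes and return-code bookkeeping):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # run the suite and report real/user/sys
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g. run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp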
00:06:27.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:27.706 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:27.707 #define SPDK_CONFIG_H 00:06:27.707 #define SPDK_CONFIG_APPS 1 00:06:27.707 #define SPDK_CONFIG_ARCH native 00:06:27.707 #undef SPDK_CONFIG_ASAN 00:06:27.707 #define SPDK_CONFIG_AVAHI 1 00:06:27.707 #undef SPDK_CONFIG_CET 00:06:27.707 #define SPDK_CONFIG_COVERAGE 1 00:06:27.707 #define SPDK_CONFIG_CROSS_PREFIX 00:06:27.707 #undef SPDK_CONFIG_CRYPTO 00:06:27.707 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:27.707 #undef SPDK_CONFIG_CUSTOMOCF 00:06:27.707 #undef SPDK_CONFIG_DAOS 00:06:27.707 #define SPDK_CONFIG_DAOS_DIR 00:06:27.707 #define SPDK_CONFIG_DEBUG 1 00:06:27.707 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:27.707 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:27.707 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:27.707 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:27.707 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:27.707 #undef SPDK_CONFIG_DPDK_UADK 00:06:27.707 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:27.707 #define SPDK_CONFIG_EXAMPLES 1 00:06:27.707 #undef SPDK_CONFIG_FC 00:06:27.707 #define SPDK_CONFIG_FC_PATH 00:06:27.707 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:27.707 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:27.707 #undef SPDK_CONFIG_FUSE 00:06:27.707 #undef SPDK_CONFIG_FUZZER 00:06:27.707 #define SPDK_CONFIG_FUZZER_LIB 00:06:27.707 #define SPDK_CONFIG_GOLANG 1 00:06:27.707 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:27.707 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:27.707 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:27.707 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:27.707 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:27.707 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:27.707 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:27.707 #define SPDK_CONFIG_IDXD 1 00:06:27.707 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:27.707 #undef SPDK_CONFIG_IPSEC_MB 00:06:27.707 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:27.707 #define SPDK_CONFIG_ISAL 1 00:06:27.707 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:27.707 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:27.707 #define SPDK_CONFIG_LIBDIR 00:06:27.707 #undef SPDK_CONFIG_LTO 00:06:27.707 #define SPDK_CONFIG_MAX_LCORES 128 00:06:27.707 #define SPDK_CONFIG_NVME_CUSE 1 00:06:27.707 #undef SPDK_CONFIG_OCF 00:06:27.707 #define SPDK_CONFIG_OCF_PATH 00:06:27.707 #define SPDK_CONFIG_OPENSSL_PATH 00:06:27.707 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:27.707 #define SPDK_CONFIG_PGO_DIR 00:06:27.707 #undef SPDK_CONFIG_PGO_USE 00:06:27.707 #define SPDK_CONFIG_PREFIX /usr/local 00:06:27.707 #undef SPDK_CONFIG_RAID5F 00:06:27.707 #undef SPDK_CONFIG_RBD 00:06:27.707 #define SPDK_CONFIG_RDMA 1 00:06:27.707 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:27.707 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:27.707 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:27.707 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:27.707 #define SPDK_CONFIG_SHARED 1 00:06:27.707 #undef SPDK_CONFIG_SMA 00:06:27.707 #define SPDK_CONFIG_TESTS 1 00:06:27.707 #undef SPDK_CONFIG_TSAN 00:06:27.707 #define SPDK_CONFIG_UBLK 1 00:06:27.707 #define SPDK_CONFIG_UBSAN 1 00:06:27.707 #undef SPDK_CONFIG_UNIT_TESTS 00:06:27.707 #undef SPDK_CONFIG_URING 00:06:27.707 #define SPDK_CONFIG_URING_PATH 00:06:27.707 #undef SPDK_CONFIG_URING_ZNS 00:06:27.707 #define SPDK_CONFIG_USDT 1 00:06:27.707 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:27.707 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:27.707 #undef SPDK_CONFIG_VFIO_USER 00:06:27.707 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:27.707 #define SPDK_CONFIG_VHOST 1 00:06:27.707 #define SPDK_CONFIG_VIRTIO 1 00:06:27.707 #undef SPDK_CONFIG_VTUNE 00:06:27.707 #define SPDK_CONFIG_VTUNE_DIR 00:06:27.707 #define SPDK_CONFIG_WERROR 1 00:06:27.707 #define SPDK_CONFIG_WPDK_DIR 00:06:27.707 #undef SPDK_CONFIG_XNVME 00:06:27.707 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.707 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:27.708 11:25:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:27.708 11:25:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65257 ]] 00:06:27.709 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65257 00:06:27.968 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:27.968 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:27.968 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.NSS9OY 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.NSS9OY/tests/target /tmp/spdk.NSS9OY 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:27.969 
11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13792743424 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5237698560 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13792743424 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5237698560 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=94682509312 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5020270592 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:27.969 * Looking for test storage... 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13792743424 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:27.969 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- 
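The storage-selection step traced above (set_test_storage) reduces to: derive a fallback directory with mktemp -u, parse df -T into per-mount size/avail/use tables, then walk the candidate directories and keep the first one whose backing filesystem can hold the requested ~2 GiB, exporting it as SPDK_TEST_STORAGE. A simplified sketch of that idea follows; it is not the literal autotest_common.sh code, and the helper name and df --output usage are illustrative only.

#!/usr/bin/env bash
# Illustrative sketch of the test-storage selection shown in the trace above.
# The real logic lives in test/common/autotest_common.sh (set_test_storage).

pick_test_storage() {
    local requested_size=$1
    local fallback
    fallback=$(mktemp -udt spdk.XXXXXX)   # unused temp path, e.g. /tmp/spdk.NSS9OY
    local -a candidates=("$PWD" "$fallback/tests" "$fallback")

    local dir mount avail
    for dir in "${candidates[@]}"; do
        mkdir -p "$dir"
        # Mount point and available bytes of the filesystem backing $dir.
        mount=$(df --output=target "$dir" | tail -n 1)
        avail=$(df --output=avail -B1 "$dir" | tail -n 1)
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$dir
            printf '* Found test storage at %s\n' "$dir"
            return 0
        fi
        printf '* %s (%s) has only %s bytes free\n' "$dir" "$mount" "$avail"
    done
    return 1
}

# 2 GiB payload plus 64 MiB overhead = 2214592512, the requested_size in the trace.
pick_test_storage $((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))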
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:27.970 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:27.971 Cannot find device "nvmf_tgt_br" 00:06:27.971 11:25:05 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:27.971 Cannot find device "nvmf_tgt_br2" 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:27.971 Cannot find device "nvmf_tgt_br" 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:27.971 Cannot find device "nvmf_tgt_br2" 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:27.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:27.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:27.971 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:28.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:06:28.230 00:06:28.230 --- 10.0.0.2 ping statistics --- 00:06:28.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.230 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:28.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:28.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:06:28.230 00:06:28.230 --- 10.0.0.3 ping statistics --- 00:06:28.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.230 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:28.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:28.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:06:28.230 00:06:28.230 --- 10.0.0.1 ping statistics --- 00:06:28.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.230 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.230 ************************************ 00:06:28.230 START TEST nvmf_filesystem_no_in_capsule 00:06:28.230 ************************************ 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65416 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65416 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65416 ']' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
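For reference, the nvmf_veth_init plumbing traced above (NET_TYPE=virt) can be collected into one standalone sequence: an initiator-side veth pair stays on the host, two target-side veth pairs have their peers moved into the nvmf_tgt_ns_spdk namespace, a bridge ties the host-side ends together, iptables opens port 4420, and the pings verify connectivity (the trace first tears down any stale devices from a previous run, hence the "Cannot find device" messages). The commands below are the ones visible in the trace, gathered in order; root is required.

#!/usr/bin/env bash
# Recap of the test-network setup performed by nvmf_veth_init above.
set -e
NS=nvmf_tgt_ns_spdk

# Namespace for the nvmf target and the three veth pairs.
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to port 4420 and bridge forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings matching the trace: host -> target IPs, namespace -> host.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1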
00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.230 11:25:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.230 [2024-07-15 11:25:05.680123] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:28.230 [2024-07-15 11:25:05.680214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.489 [2024-07-15 11:25:05.820124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.489 [2024-07-15 11:25:05.929301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.489 [2024-07-15 11:25:05.929676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.489 [2024-07-15 11:25:05.929884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.489 [2024-07-15 11:25:05.930041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.489 [2024-07-15 11:25:05.930198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:28.489 [2024-07-15 11:25:05.930501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.489 [2024-07-15 11:25:05.930616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.489 [2024-07-15 11:25:05.930715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.489 [2024-07-15 11:25:05.930735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 [2024-07-15 11:25:06.072443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.747 
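With the target network up, nvmfappstart launches nvmf_tgt inside the namespace and the test drives it over the default JSON-RPC socket; the rpc_cmd calls visible here and in the following lines create the TCP transport, a Malloc bdev, and the subsystem with its namespace and listener. A rough standalone equivalent using scripts/rpc.py is sketched below, with arguments taken from the trace; the sleep merely stands in for the test's waitforlisten helper, so treat this as an illustrative recap rather than the exact test code.

#!/usr/bin/env bash
# Illustrative recap of the target start-up and RPC configuration steps
# (target/filesystem.sh@49-56 in the trace).
SPDK=/home/vagrant/spdk_repo/spdk
NS=nvmf_tgt_ns_spdk

# Start the NVMe-oF target inside the test namespace (shm id 0, all trace
# groups enabled, 4 cores), as shown in the trace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 2   # the real test waits on /var/tmp/spdk.sock via waitforlisten

rpc() { "$SPDK/scripts/rpc.py" "$@"; }

# TCP transport with 8 KiB I/O unit size and in-capsule data size 0
# (this is the nvmf_filesystem_no_in_capsule variant).
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0

# 512 MiB malloc bdev with 512-byte blocks, exposed through one subsystem
# listening on the namespace-side address 10.0.0.2:4420.
rpc bdev_malloc_create 512 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420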
11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 Malloc1 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 [2024-07-15 11:25:06.203655] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:28.747 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:28.748 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:28.748 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:28.748 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:28.748 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:28.748 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.748 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:29.006 { 00:06:29.006 "aliases": [ 00:06:29.006 "6bade0df-3f01-44c5-9f54-d1dda80f3859" 00:06:29.006 ], 00:06:29.006 "assigned_rate_limits": { 00:06:29.006 "r_mbytes_per_sec": 0, 00:06:29.006 "rw_ios_per_sec": 0, 00:06:29.006 "rw_mbytes_per_sec": 0, 00:06:29.006 "w_mbytes_per_sec": 0 00:06:29.006 }, 00:06:29.006 "block_size": 512, 00:06:29.006 "claim_type": "exclusive_write", 00:06:29.006 "claimed": true, 00:06:29.006 "driver_specific": {}, 00:06:29.006 "memory_domains": [ 00:06:29.006 { 00:06:29.006 "dma_device_id": "system", 00:06:29.006 "dma_device_type": 1 00:06:29.006 }, 00:06:29.006 { 00:06:29.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.006 "dma_device_type": 2 00:06:29.006 } 00:06:29.006 ], 00:06:29.006 "name": "Malloc1", 00:06:29.006 "num_blocks": 1048576, 00:06:29.006 "product_name": "Malloc disk", 00:06:29.006 "supported_io_types": { 00:06:29.006 "abort": true, 00:06:29.006 "compare": false, 00:06:29.006 "compare_and_write": false, 00:06:29.006 "copy": true, 00:06:29.006 "flush": true, 00:06:29.006 "get_zone_info": false, 00:06:29.006 "nvme_admin": false, 00:06:29.006 "nvme_io": false, 00:06:29.006 "nvme_io_md": false, 00:06:29.006 "nvme_iov_md": false, 00:06:29.006 "read": true, 00:06:29.006 "reset": true, 00:06:29.006 "seek_data": false, 00:06:29.006 "seek_hole": false, 00:06:29.006 "unmap": true, 00:06:29.006 "write": true, 00:06:29.006 "write_zeroes": true, 00:06:29.006 "zcopy": true, 00:06:29.006 "zone_append": false, 00:06:29.006 "zone_management": false 00:06:29.006 }, 00:06:29.006 "uuid": "6bade0df-3f01-44c5-9f54-d1dda80f3859", 00:06:29.006 "zoned": false 00:06:29.006 } 00:06:29.006 ]' 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:29.006 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:29.352 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:29.352 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:29.352 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:29.352 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:29.352 11:25:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:31.270 11:25:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 
00:06:32.646 START TEST filesystem_ext4 00:06:32.646 ************************************ 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:32.646 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:32.647 mke2fs 1.46.5 (30-Dec-2021) 00:06:32.647 Discarding device blocks: 0/522240 done 00:06:32.647 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:32.647 Filesystem UUID: a024d120-e059-4f34-8ae4-f0f218ad51cf 00:06:32.647 Superblock backups stored on blocks: 00:06:32.647 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:32.647 00:06:32.647 Allocating group tables: 0/64 done 00:06:32.647 Writing inode tables: 0/64 done 00:06:32.647 Creating journal (8192 blocks): done 00:06:32.647 Writing superblocks and filesystem accounting information: 0/64 done 00:06:32.647 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:32.647 11:25:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.647 11:25:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65416 00:06:32.647 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.647 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.647 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.647 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.647 00:06:32.647 real 0m0.366s 00:06:32.647 user 0m0.025s 00:06:32.647 sys 0m0.048s 00:06:32.647 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.647 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:32.647 ************************************ 00:06:32.647 END TEST filesystem_ext4 00:06:32.647 ************************************ 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.906 ************************************ 00:06:32.906 START TEST filesystem_btrfs 00:06:32.906 ************************************ 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:32.906 
11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:32.906 btrfs-progs v6.6.2 00:06:32.906 See https://btrfs.readthedocs.io for more information. 00:06:32.906 00:06:32.906 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:32.906 NOTE: several default settings have changed in version 5.15, please make sure 00:06:32.906 this does not affect your deployments: 00:06:32.906 - DUP for metadata (-m dup) 00:06:32.906 - enabled no-holes (-O no-holes) 00:06:32.906 - enabled free-space-tree (-R free-space-tree) 00:06:32.906 00:06:32.906 Label: (null) 00:06:32.906 UUID: 0eb129fb-5b76-4de3-a1ab-71614bdaa3b5 00:06:32.906 Node size: 16384 00:06:32.906 Sector size: 4096 00:06:32.906 Filesystem size: 510.00MiB 00:06:32.906 Block group profiles: 00:06:32.906 Data: single 8.00MiB 00:06:32.906 Metadata: DUP 32.00MiB 00:06:32.906 System: DUP 8.00MiB 00:06:32.906 SSD detected: yes 00:06:32.906 Zoned device: no 00:06:32.906 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:32.906 Runtime features: free-space-tree 00:06:32.906 Checksum: crc32c 00:06:32.906 Number of devices: 1 00:06:32.906 Devices: 00:06:32.906 ID SIZE PATH 00:06:32.906 1 510.00MiB /dev/nvme0n1p1 00:06:32.906 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65416 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.906 00:06:32.906 real 0m0.192s 00:06:32.906 user 0m0.020s 00:06:32.906 sys 0m0.071s 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:32.906 
************************************ 00:06:32.906 END TEST filesystem_btrfs 00:06:32.906 ************************************ 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.906 ************************************ 00:06:32.906 START TEST filesystem_xfs 00:06:32.906 ************************************ 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:32.906 11:25:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:33.165 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:33.165 = sectsz=512 attr=2, projid32bit=1 00:06:33.165 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:33.165 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:33.165 data = bsize=4096 blocks=130560, imaxpct=25 00:06:33.165 = sunit=0 swidth=0 blks 00:06:33.165 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:33.165 log =internal log bsize=4096 blocks=16384, version=2 00:06:33.165 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:33.165 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:33.734 Discarding blocks...Done. 
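The filesystem_ext4, filesystem_btrfs and filesystem_xfs subtests traced here all drive the same create-and-verify cycle from target/filesystem.sh against the exported namespace. A condensed sketch of that cycle; the loop wrapper is illustrative, the individual commands, paths and mkfs flags are the ones visible in the trace, and $nvmfpid stands for the target pid (65416 in this run):

# One GPT partition is created up front, then each filesystem is exercised on it.
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1
mkdir -p /mnt/device

for fstype in ext4 btrfs xfs; do
    force=-f
    [ "$fstype" = ext4 ] && force=-F          # mkfs.ext4 takes -F, btrfs/xfs take -f
    mkfs."$fstype" "$force" /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync             # push a write through the NVMe/TCP path
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # the target must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # and the partition must still be visible
done
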
00:06:33.734 11:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:33.734 11:25:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:36.262 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65416 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:36.263 00:06:36.263 real 0m3.210s 00:06:36.263 user 0m0.019s 00:06:36.263 sys 0m0.052s 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:36.263 ************************************ 00:06:36.263 END TEST filesystem_xfs 00:06:36.263 ************************************ 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:36.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.263 11:25:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65416 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65416 ']' 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65416 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65416 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65416' 00:06:36.263 killing process with pid 65416 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65416 00:06:36.263 11:25:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65416 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:36.827 00:06:36.827 real 0m8.387s 00:06:36.827 user 0m31.334s 00:06:36.827 sys 0m1.477s 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.827 ************************************ 00:06:36.827 END TEST nvmf_filesystem_no_in_capsule 00:06:36.827 ************************************ 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
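The no_in_capsule pass finishes with the usual teardown traced just above: remove the test partition, disconnect the host, delete the subsystem over RPC, then stop the target, before the in_capsule pass repeats the whole flow. A hedged sketch of that teardown; NQN, serial and pid are taken from the log, the rpc.py invocation is assumed as in the earlier sketch:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # path assumed, as above

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# waitforserial_disconnect: wait until no block device reports the test serial.
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 65416                                            # killprocess: stop nvmf_tgt
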
00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.827 ************************************ 00:06:36.827 START TEST nvmf_filesystem_in_capsule 00:06:36.827 ************************************ 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65716 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:36.827 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65716 00:06:36.828 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65716 ']' 00:06:36.828 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.828 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.828 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.828 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.828 11:25:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.828 [2024-07-15 11:25:14.109406] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:36.828 [2024-07-15 11:25:14.109506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.828 [2024-07-15 11:25:14.249678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.085 [2024-07-15 11:25:14.309395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.085 [2024-07-15 11:25:14.309453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.085 [2024-07-15 11:25:14.309466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.085 [2024-07-15 11:25:14.309474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
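Both passes configure the target identically; the only difference is the in-capsule data size handed to nvmf_create_transport (-c 0 in the first pass, -c 4096 in this one, matching in_capsule=4096 above). A condensed sketch of the RPC sequence and the host-side connect that the following trace performs; the rpc.py path and socket are assumed, while NQNs, addresses, flags and the host UUID are copied from the log:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # path assumed

# Target side: TCP transport with 4096-byte in-capsule data and one 512 MiB malloc namespace.
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
$rpc bdev_malloc_create 512 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect and wait until the namespace shows up with the test serial.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
    --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
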
00:06:37.085 [2024-07-15 11:25:14.309482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.085 [2024-07-15 11:25:14.309628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.085 [2024-07-15 11:25:14.309940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.085 [2024-07-15 11:25:14.310407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.085 [2024-07-15 11:25:14.310443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.650 [2024-07-15 11:25:15.099340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.650 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.908 Malloc1 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.908 11:25:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.908 [2024-07-15 11:25:15.225853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:37.908 { 00:06:37.908 "aliases": [ 00:06:37.908 "8cb66687-0536-4c77-8634-978e7557de45" 00:06:37.908 ], 00:06:37.908 "assigned_rate_limits": { 00:06:37.908 "r_mbytes_per_sec": 0, 00:06:37.908 "rw_ios_per_sec": 0, 00:06:37.908 "rw_mbytes_per_sec": 0, 00:06:37.908 "w_mbytes_per_sec": 0 00:06:37.908 }, 00:06:37.908 "block_size": 512, 00:06:37.908 "claim_type": "exclusive_write", 00:06:37.908 "claimed": true, 00:06:37.908 "driver_specific": {}, 00:06:37.908 "memory_domains": [ 00:06:37.908 { 00:06:37.908 "dma_device_id": "system", 00:06:37.908 "dma_device_type": 1 00:06:37.908 }, 00:06:37.908 { 00:06:37.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.908 "dma_device_type": 2 00:06:37.908 } 00:06:37.908 ], 00:06:37.908 "name": "Malloc1", 00:06:37.908 "num_blocks": 1048576, 00:06:37.908 "product_name": "Malloc disk", 00:06:37.908 "supported_io_types": { 00:06:37.908 "abort": true, 00:06:37.908 "compare": false, 00:06:37.908 "compare_and_write": false, 00:06:37.908 "copy": true, 00:06:37.908 "flush": true, 00:06:37.908 "get_zone_info": false, 00:06:37.908 "nvme_admin": false, 00:06:37.908 "nvme_io": false, 00:06:37.908 "nvme_io_md": false, 00:06:37.908 "nvme_iov_md": false, 00:06:37.908 "read": true, 00:06:37.908 "reset": true, 00:06:37.908 "seek_data": false, 00:06:37.908 "seek_hole": false, 00:06:37.908 "unmap": true, 
00:06:37.908 "write": true, 00:06:37.908 "write_zeroes": true, 00:06:37.908 "zcopy": true, 00:06:37.908 "zone_append": false, 00:06:37.908 "zone_management": false 00:06:37.908 }, 00:06:37.908 "uuid": "8cb66687-0536-4c77-8634-978e7557de45", 00:06:37.908 "zoned": false 00:06:37.908 } 00:06:37.908 ]' 00:06:37.908 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:37.909 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:38.167 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:38.167 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:38.167 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:38.167 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:38.167 11:25:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:40.106 11:25:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:40.106 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:40.364 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:40.364 11:25:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.297 ************************************ 00:06:41.297 START TEST filesystem_in_capsule_ext4 00:06:41.297 ************************************ 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:41.297 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:41.297 11:25:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:41.297 mke2fs 1.46.5 (30-Dec-2021) 00:06:41.297 Discarding device blocks: 0/522240 done 00:06:41.297 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:41.297 Filesystem UUID: 489b360a-662c-4011-ad59-cca07b1c7118 00:06:41.297 Superblock backups stored on blocks: 00:06:41.297 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:41.297 00:06:41.297 Allocating group tables: 0/64 done 00:06:41.297 Writing inode tables: 0/64 done 00:06:41.554 Creating journal (8192 blocks): done 00:06:41.554 Writing superblocks and filesystem accounting information: 0/64 done 00:06:41.554 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65716 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.554 11:25:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.554 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.554 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.554 00:06:41.554 real 0m0.339s 00:06:41.554 user 0m0.015s 00:06:41.554 sys 0m0.050s 00:06:41.554 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.554 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:41.554 ************************************ 00:06:41.554 END TEST filesystem_in_capsule_ext4 00:06:41.554 ************************************ 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:41.812 11:25:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.812 ************************************ 00:06:41.812 START TEST filesystem_in_capsule_btrfs 00:06:41.812 ************************************ 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:41.812 btrfs-progs v6.6.2 00:06:41.812 See https://btrfs.readthedocs.io for more information. 00:06:41.812 00:06:41.812 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:41.812 NOTE: several default settings have changed in version 5.15, please make sure 00:06:41.812 this does not affect your deployments: 00:06:41.812 - DUP for metadata (-m dup) 00:06:41.812 - enabled no-holes (-O no-holes) 00:06:41.812 - enabled free-space-tree (-R free-space-tree) 00:06:41.812 00:06:41.812 Label: (null) 00:06:41.812 UUID: dca9ee89-5ef0-41ad-8606-7b4cbf23cee5 00:06:41.812 Node size: 16384 00:06:41.812 Sector size: 4096 00:06:41.812 Filesystem size: 510.00MiB 00:06:41.812 Block group profiles: 00:06:41.812 Data: single 8.00MiB 00:06:41.812 Metadata: DUP 32.00MiB 00:06:41.812 System: DUP 8.00MiB 00:06:41.812 SSD detected: yes 00:06:41.812 Zoned device: no 00:06:41.812 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:41.812 Runtime features: free-space-tree 00:06:41.812 Checksum: crc32c 00:06:41.812 Number of devices: 1 00:06:41.812 Devices: 00:06:41.812 ID SIZE PATH 00:06:41.812 1 510.00MiB /dev/nvme0n1p1 00:06:41.812 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65716 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.812 00:06:41.812 real 0m0.163s 00:06:41.812 user 0m0.014s 00:06:41.812 sys 0m0.063s 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:41.812 ************************************ 00:06:41.812 END TEST filesystem_in_capsule_btrfs 00:06:41.812 ************************************ 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.812 ************************************ 00:06:41.812 START TEST filesystem_in_capsule_xfs 00:06:41.812 ************************************ 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:41.812 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:42.070 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:42.070 = sectsz=512 attr=2, projid32bit=1 00:06:42.070 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:42.070 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:42.070 data = bsize=4096 blocks=130560, imaxpct=25 00:06:42.070 = sunit=0 swidth=0 blks 00:06:42.070 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:42.070 log =internal log bsize=4096 blocks=16384, version=2 00:06:42.070 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:42.070 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:42.633 Discarding blocks...Done. 
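For reference, the make_filesystem helper traced above reduces to roughly the bash below. This is a minimal sketch reconstructed from the traced commands in common/autotest_common.sh; the retry bound and the sleep are assumptions suggested by the 'local i=0' counter, not the exact upstream logic.

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        # ext4 forces with -F, xfs and btrfs with -f, as seen in the trace
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        # retry a few times in case the block device is still settling (assumed bound of 5)
        until mkfs."$fstype" $force "$dev_name"; do
            [ "$((i++))" -lt 5 ] || return 1
            sleep 1
        done
        return 0
    }

The xfs run above succeeds on the first attempt, so only the single mkfs.xfs -f /dev/nvme0n1p1 call and its output appear in the trace.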
00:06:42.633 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:42.633 11:25:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65716 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:44.526 00:06:44.526 real 0m2.555s 00:06:44.526 user 0m0.017s 00:06:44.526 sys 0m0.050s 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:44.526 ************************************ 00:06:44.526 END TEST filesystem_in_capsule_xfs 00:06:44.526 ************************************ 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:44.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:44.526 11:25:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65716 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65716 ']' 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65716 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65716 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.526 killing process with pid 65716 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65716' 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65716 00:06:44.526 11:25:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65716 00:06:44.783 11:25:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:44.783 00:06:44.783 real 0m8.199s 00:06:44.783 user 0m30.835s 00:06:44.783 sys 0m1.537s 00:06:44.783 11:25:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.783 11:25:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.783 ************************************ 00:06:44.784 END TEST nvmf_filesystem_in_capsule 00:06:44.784 ************************************ 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.066 rmmod nvme_tcp 00:06:45.066 rmmod nvme_fabrics 00:06:45.066 rmmod nvme_keyring 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:45.066 00:06:45.066 real 0m17.391s 00:06:45.066 user 1m2.422s 00:06:45.066 sys 0m3.372s 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.066 11:25:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.066 ************************************ 00:06:45.066 END TEST nvmf_filesystem 00:06:45.066 ************************************ 00:06:45.066 11:25:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:45.066 11:25:22 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.066 11:25:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.066 11:25:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.066 11:25:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.066 ************************************ 00:06:45.066 START TEST nvmf_target_discovery 00:06:45.066 ************************************ 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.066 * Looking for test storage... 
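The teardown traced here, killprocess followed by nvmftestfini, amounts to approximately the sequence below. This is a sketch assembled from the traced commands; the netns deletion is an assumption about what _remove_spdk_ns does for this veth setup.

    kill "$nvmfpid" && wait "$nvmfpid"            # killprocess: stop the nvmf_tgt reactor
    sync
    modprobe -v -r nvme-tcp                       # -v prints the rmmod calls: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics                   # effectively a no-op by now, kept for non-tcp transports
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null  # assumed effect of _remove_spdk_ns here
    ip -4 addr flush nvmf_init_if                 # clear the initiator-side address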
00:06:45.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:45.066 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:45.324 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:45.324 Cannot find device "nvmf_tgt_br" 00:06:45.324 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:06:45.324 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:45.324 Cannot find device "nvmf_tgt_br2" 00:06:45.324 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:06:45.324 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:45.324 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:45.324 Cannot find device "nvmf_tgt_br" 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:45.325 Cannot find device "nvmf_tgt_br2" 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:45.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:45.325 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:45.325 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:45.583 11:25:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:45.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:06:45.583 00:06:45.583 --- 10.0.0.2 ping statistics --- 00:06:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.583 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:45.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:45.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:06:45.583 00:06:45.583 --- 10.0.0.3 ping statistics --- 00:06:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.583 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:45.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:06:45.583 00:06:45.583 --- 10.0.0.1 ping statistics --- 00:06:45.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.583 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66159 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66159 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66159 ']' 00:06:45.583 11:25:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.583 11:25:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:45.583 [2024-07-15 11:25:22.958621] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:45.583 [2024-07-15 11:25:22.958716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.841 [2024-07-15 11:25:23.092295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.841 [2024-07-15 11:25:23.171852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.841 [2024-07-15 11:25:23.171903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.841 [2024-07-15 11:25:23.171915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.841 [2024-07-15 11:25:23.171924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.841 [2024-07-15 11:25:23.171931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
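Condensed, the veth and network-namespace topology that nvmf_veth_init builds above, and inside which the target is then launched, looks like the following. This is a sketch assembled from the traced ip and iptables commands (single target interface shown); the for loop just condenses the individual 'ip link set ... up' calls.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # 10.0.0.1 on the host reaching the target address inside the namespace

    # nvmfappstart then runs the target inside that namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &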
00:06:45.841 [2024-07-15 11:25:23.172022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.841 [2024-07-15 11:25:23.172064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.841 [2024-07-15 11:25:23.172797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.841 [2024-07-15 11:25:23.172803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.774 [2024-07-15 11:25:24.047440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.774 Null1 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.774 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.775 [2024-07-15 11:25:24.107313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 Null2 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 Null3 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 Null4 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.775 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.775 
11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 4420 00:06:47.033 00:06:47.033 Discovery Log Number of Records 6, Generation counter 6 00:06:47.033 =====Discovery Log Entry 0====== 00:06:47.033 trtype: tcp 00:06:47.033 adrfam: ipv4 00:06:47.033 subtype: current discovery subsystem 00:06:47.033 treq: not required 00:06:47.033 portid: 0 00:06:47.033 trsvcid: 4420 00:06:47.033 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:47.033 traddr: 10.0.0.2 00:06:47.033 eflags: explicit discovery connections, duplicate discovery information 00:06:47.033 sectype: none 00:06:47.033 =====Discovery Log Entry 1====== 00:06:47.033 trtype: tcp 00:06:47.033 adrfam: ipv4 00:06:47.033 subtype: nvme subsystem 00:06:47.033 treq: not required 00:06:47.033 portid: 0 00:06:47.033 trsvcid: 4420 00:06:47.033 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:47.033 traddr: 10.0.0.2 00:06:47.033 eflags: none 00:06:47.033 sectype: none 00:06:47.033 =====Discovery Log Entry 2====== 00:06:47.033 trtype: tcp 00:06:47.033 adrfam: ipv4 00:06:47.033 subtype: nvme subsystem 00:06:47.033 treq: not required 00:06:47.033 portid: 0 00:06:47.033 trsvcid: 4420 00:06:47.033 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:47.033 traddr: 10.0.0.2 00:06:47.033 eflags: none 00:06:47.033 sectype: none 00:06:47.033 =====Discovery Log Entry 3====== 00:06:47.033 trtype: tcp 00:06:47.033 adrfam: ipv4 00:06:47.033 subtype: nvme subsystem 00:06:47.033 treq: not required 00:06:47.033 portid: 0 00:06:47.033 trsvcid: 4420 00:06:47.033 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:47.033 traddr: 10.0.0.2 00:06:47.033 eflags: none 00:06:47.033 sectype: none 00:06:47.033 =====Discovery Log Entry 4====== 00:06:47.033 trtype: tcp 00:06:47.033 adrfam: ipv4 00:06:47.033 subtype: nvme subsystem 00:06:47.033 treq: not required 00:06:47.033 portid: 0 00:06:47.033 trsvcid: 4420 00:06:47.033 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:47.033 traddr: 10.0.0.2 00:06:47.034 eflags: none 00:06:47.034 sectype: none 00:06:47.034 =====Discovery Log Entry 5====== 00:06:47.034 trtype: tcp 00:06:47.034 adrfam: ipv4 00:06:47.034 subtype: discovery subsystem referral 00:06:47.034 treq: not required 00:06:47.034 portid: 0 00:06:47.034 trsvcid: 4430 00:06:47.034 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:47.034 traddr: 10.0.0.2 00:06:47.034 eflags: none 00:06:47.034 sectype: none 00:06:47.034 Perform nvmf subsystem discovery via RPC 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 [ 00:06:47.034 { 00:06:47.034 "allow_any_host": true, 00:06:47.034 "hosts": [], 00:06:47.034 "listen_addresses": [ 00:06:47.034 { 00:06:47.034 "adrfam": "IPv4", 00:06:47.034 "traddr": "10.0.0.2", 00:06:47.034 "trsvcid": "4420", 00:06:47.034 "trtype": "TCP" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:47.034 "subtype": "Discovery" 00:06:47.034 }, 00:06:47.034 { 00:06:47.034 "allow_any_host": true, 00:06:47.034 "hosts": [], 00:06:47.034 "listen_addresses": [ 00:06:47.034 { 
00:06:47.034 "adrfam": "IPv4", 00:06:47.034 "traddr": "10.0.0.2", 00:06:47.034 "trsvcid": "4420", 00:06:47.034 "trtype": "TCP" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "max_cntlid": 65519, 00:06:47.034 "max_namespaces": 32, 00:06:47.034 "min_cntlid": 1, 00:06:47.034 "model_number": "SPDK bdev Controller", 00:06:47.034 "namespaces": [ 00:06:47.034 { 00:06:47.034 "bdev_name": "Null1", 00:06:47.034 "name": "Null1", 00:06:47.034 "nguid": "3D2EF4150DBE4EEDABC4A40EA1652C1F", 00:06:47.034 "nsid": 1, 00:06:47.034 "uuid": "3d2ef415-0dbe-4eed-abc4-a40ea1652c1f" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:47.034 "serial_number": "SPDK00000000000001", 00:06:47.034 "subtype": "NVMe" 00:06:47.034 }, 00:06:47.034 { 00:06:47.034 "allow_any_host": true, 00:06:47.034 "hosts": [], 00:06:47.034 "listen_addresses": [ 00:06:47.034 { 00:06:47.034 "adrfam": "IPv4", 00:06:47.034 "traddr": "10.0.0.2", 00:06:47.034 "trsvcid": "4420", 00:06:47.034 "trtype": "TCP" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "max_cntlid": 65519, 00:06:47.034 "max_namespaces": 32, 00:06:47.034 "min_cntlid": 1, 00:06:47.034 "model_number": "SPDK bdev Controller", 00:06:47.034 "namespaces": [ 00:06:47.034 { 00:06:47.034 "bdev_name": "Null2", 00:06:47.034 "name": "Null2", 00:06:47.034 "nguid": "BB9139F3DF0E44C0B6A6ABAF4D4B0C1B", 00:06:47.034 "nsid": 1, 00:06:47.034 "uuid": "bb9139f3-df0e-44c0-b6a6-abaf4d4b0c1b" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:47.034 "serial_number": "SPDK00000000000002", 00:06:47.034 "subtype": "NVMe" 00:06:47.034 }, 00:06:47.034 { 00:06:47.034 "allow_any_host": true, 00:06:47.034 "hosts": [], 00:06:47.034 "listen_addresses": [ 00:06:47.034 { 00:06:47.034 "adrfam": "IPv4", 00:06:47.034 "traddr": "10.0.0.2", 00:06:47.034 "trsvcid": "4420", 00:06:47.034 "trtype": "TCP" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "max_cntlid": 65519, 00:06:47.034 "max_namespaces": 32, 00:06:47.034 "min_cntlid": 1, 00:06:47.034 "model_number": "SPDK bdev Controller", 00:06:47.034 "namespaces": [ 00:06:47.034 { 00:06:47.034 "bdev_name": "Null3", 00:06:47.034 "name": "Null3", 00:06:47.034 "nguid": "8645080E47864FD48295F5BB9C897DB9", 00:06:47.034 "nsid": 1, 00:06:47.034 "uuid": "8645080e-4786-4fd4-8295-f5bb9c897db9" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:47.034 "serial_number": "SPDK00000000000003", 00:06:47.034 "subtype": "NVMe" 00:06:47.034 }, 00:06:47.034 { 00:06:47.034 "allow_any_host": true, 00:06:47.034 "hosts": [], 00:06:47.034 "listen_addresses": [ 00:06:47.034 { 00:06:47.034 "adrfam": "IPv4", 00:06:47.034 "traddr": "10.0.0.2", 00:06:47.034 "trsvcid": "4420", 00:06:47.034 "trtype": "TCP" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "max_cntlid": 65519, 00:06:47.034 "max_namespaces": 32, 00:06:47.034 "min_cntlid": 1, 00:06:47.034 "model_number": "SPDK bdev Controller", 00:06:47.034 "namespaces": [ 00:06:47.034 { 00:06:47.034 "bdev_name": "Null4", 00:06:47.034 "name": "Null4", 00:06:47.034 "nguid": "DDAA273879A448AFAF9705F4BA9745CE", 00:06:47.034 "nsid": 1, 00:06:47.034 "uuid": "ddaa2738-79a4-48af-af97-05f4ba9745ce" 00:06:47.034 } 00:06:47.034 ], 00:06:47.034 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:47.034 "serial_number": "SPDK00000000000004", 00:06:47.034 "subtype": "NVMe" 00:06:47.034 } 00:06:47.034 ] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:47.034 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.035 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:47.035 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.035 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.035 rmmod nvme_tcp 00:06:47.035 rmmod nvme_fabrics 00:06:47.035 rmmod nvme_keyring 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66159 ']' 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66159 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66159 ']' 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66159 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66159 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.292 
11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.292 killing process with pid 66159 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66159' 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66159 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66159 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:47.292 00:06:47.292 real 0m2.314s 00:06:47.292 user 0m6.504s 00:06:47.292 sys 0m0.540s 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.292 11:25:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.292 ************************************ 00:06:47.292 END TEST nvmf_target_discovery 00:06:47.292 ************************************ 00:06:47.550 11:25:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:47.550 11:25:24 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:47.550 11:25:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.550 11:25:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.550 11:25:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.550 ************************************ 00:06:47.550 START TEST nvmf_referrals 00:06:47.550 ************************************ 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:47.550 * Looking for test storage... 
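[editor's note] The teardown that nvmf_target_discovery just finished maps directly onto plain SPDK RPC calls. The sketch below is illustrative only: the test drives these through its rpc_cmd wrapper, and the scripts/rpc.py path and loop shape are assumptions, not a transcript of this run — the RPC names and arguments are the ones visible in the log above.

    # delete the four test subsystems and their backing null bdevs
    for i in $(seq 1 4); do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done
    # drop the discovery referral added earlier and confirm no bdevs remain
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'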
00:06:47.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:47.550 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:47.551 Cannot find device "nvmf_tgt_br" 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:47.551 Cannot find device "nvmf_tgt_br2" 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:47.551 Cannot find device "nvmf_tgt_br" 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:47.551 Cannot find device "nvmf_tgt_br2" 
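[editor's note] The "Cannot find device" and "Cannot open network namespace" errors here are expected: nvmf_veth_init first tears down any leftover test network before rebuilding it. Condensed, the topology the following commands create looks like this (a sketch mirroring the ip/iptables calls that appear in the log; it is not an additional setup step):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host (10.0.0.1)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace (10.0.0.2)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT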
00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:47.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:47.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:06:47.551 11:25:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:47.551 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:47.551 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:47.551 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:47.551 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:47.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:47.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:06:47.809 00:06:47.809 --- 10.0.0.2 ping statistics --- 00:06:47.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.809 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:47.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:47.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:06:47.809 00:06:47.809 --- 10.0.0.3 ping statistics --- 00:06:47.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.809 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:47.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:47.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:06:47.809 00:06:47.809 --- 10.0.0.1 ping statistics --- 00:06:47.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:47.809 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:47.809 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66395 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66395 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66395 ']' 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
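[editor's note] Once nvmf_tgt is listening on /var/tmp/spdk.sock inside the namespace, the referrals test configures it with a short RPC sequence. A minimal sketch, using the same transport options, listener address, and referral ports as the log (the scripts/rpc.py invocation is an assumption; the test uses its rpc_cmd wrapper):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
    done
    scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test expects 3 here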
00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.810 11:25:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.069 [2024-07-15 11:25:25.293040] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:48.069 [2024-07-15 11:25:25.293186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.069 [2024-07-15 11:25:25.438230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.069 [2024-07-15 11:25:25.520282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.069 [2024-07-15 11:25:25.520342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.069 [2024-07-15 11:25:25.520354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.069 [2024-07-15 11:25:25.520362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.069 [2024-07-15 11:25:25.520369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:48.069 [2024-07-15 11:25:25.520541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.069 [2024-07-15 11:25:25.520622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.069 [2024-07-15 11:25:25.521101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.069 [2024-07-15 11:25:25.521108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.001 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.001 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:49.001 11:25:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.002 [2024-07-15 11:25:26.448664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.002 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.002 [2024-07-15 11:25:26.475084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 
--hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.259 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:49.517 11:25:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:49.775 11:25:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:49.775 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:50.033 11:25:27 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.033 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:50.292 
11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:50.292 rmmod nvme_tcp 00:06:50.292 rmmod nvme_fabrics 00:06:50.292 rmmod nvme_keyring 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66395 ']' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66395 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66395 ']' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66395 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66395 00:06:50.292 killing process with pid 66395 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66395' 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66395 00:06:50.292 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66395 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:50.550 ************************************ 00:06:50.550 END TEST nvmf_referrals 00:06:50.550 ************************************ 
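[editor's note] The host-side check this test repeats throughout is an nvme discover against the discovery listener, with jq filtering out the "current discovery subsystem" record so only referrals remain. Roughly, taken from the commands in the log:

    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
        --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The test compares this output against what nvmf_discovery_get_referrals reports over RPC after each add/remove.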
00:06:50.550 00:06:50.550 real 0m3.108s 00:06:50.550 user 0m10.566s 00:06:50.550 sys 0m0.754s 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.550 11:25:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.550 11:25:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:50.550 11:25:27 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:50.550 11:25:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.550 11:25:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.550 11:25:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.550 ************************************ 00:06:50.550 START TEST nvmf_connect_disconnect 00:06:50.550 ************************************ 00:06:50.550 11:25:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:50.550 * Looking for test storage... 00:06:50.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.551 11:25:28 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.551 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:06:50.810 Cannot find device "nvmf_tgt_br" 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:50.810 Cannot find device "nvmf_tgt_br2" 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:50.810 Cannot find device "nvmf_tgt_br" 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:50.810 Cannot find device "nvmf_tgt_br2" 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:50.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:50.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:50.810 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:51.070 11:25:28 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:51.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:06:51.070 00:06:51.070 --- 10.0.0.2 ping statistics --- 00:06:51.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.070 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:51.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:51.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:06:51.070 00:06:51.070 --- 10.0.0.3 ping statistics --- 00:06:51.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.070 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:51.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:51.070 00:06:51.070 --- 10.0.0.1 ping statistics --- 00:06:51.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.070 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66698 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66698 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66698 ']' 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.070 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.070 [2024-07-15 11:25:28.473890] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:06:51.070 [2024-07-15 11:25:28.474697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.329 [2024-07-15 11:25:28.620932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.329 [2024-07-15 11:25:28.693060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
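For reference, the nvmf_veth_init steps traced above build a small veth/bridge topology with the target side isolated in its own network namespace. A condensed sketch of the same wiring with plain iproute2, using the interface, namespace, and address names from the log (run as root; this is an illustration of the traced commands, not the test helper itself):

ip netns add nvmf_tgt_ns_spdk

# three veth pairs: one for the initiator, two for the target ports
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace and assign the test addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring the links up and bridge the host-side peers together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP traffic on port 4420 and let it cross the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks mirrored by the pings in the trace
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1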
00:06:51.329 [2024-07-15 11:25:28.693357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.329 [2024-07-15 11:25:28.693649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.329 [2024-07-15 11:25:28.693917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.329 [2024-07-15 11:25:28.694037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.329 [2024-07-15 11:25:28.694264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.329 [2024-07-15 11:25:28.694351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.329 [2024-07-15 11:25:28.694584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.329 [2024-07-15 11:25:28.694590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.329 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.329 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:51.329 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.329 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.329 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.587 [2024-07-15 11:25:28.829569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
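The rpc_cmd calls traced above and in the following lines configure the freshly started target over its JSON-RPC socket. Outside the harness this is roughly equivalent to driving scripts/rpc.py directly against the socket path shown in the log; a sketch with the same parameters (the repo path, socket path, and Malloc0 name are taken from the trace):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport with the same options the test passes, then a 64 MB / 512 B-block RAM bdev
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
$RPC bdev_malloc_create 64 512            # prints the new bdev name, Malloc0 in this run

# subsystem cnode1 backed by that bdev, listening on the namespace-side address
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420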
00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.587 [2024-07-15 11:25:28.893788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.587 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.588 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:51.588 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:51.588 11:25:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:54.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:58.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.143 rmmod nvme_tcp 00:07:03.143 rmmod nvme_fabrics 00:07:03.143 rmmod nvme_keyring 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66698 ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66698 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66698 ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66698 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66698 00:07:03.143 killing process with pid 66698 00:07:03.143 11:25:40 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66698' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66698 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66698 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:03.143 00:07:03.143 real 0m12.453s 00:07:03.143 user 0m45.323s 00:07:03.143 sys 0m1.824s 00:07:03.143 ************************************ 00:07:03.143 END TEST nvmf_connect_disconnect 00:07:03.143 ************************************ 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.143 11:25:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:03.143 11:25:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:03.143 11:25:40 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:03.143 11:25:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.143 11:25:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.143 11:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.143 ************************************ 00:07:03.143 START TEST nvmf_multitarget 00:07:03.143 ************************************ 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:03.143 * Looking for test storage... 
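The five "disconnected 1 controller(s)" messages above are nvme-cli output from the connect/disconnect loop (num_iterations=5). Stripped of the harness's wait helpers, each pass amounts to roughly the following against the listener created earlier; the sleep is only a placeholder for the test's own readiness checks:

for i in $(seq 1 5); do
    # attach a host controller over NVMe/TCP, then detach it again
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    sleep 1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
done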
00:07:03.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.143 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.144 11:25:40 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:03.144 Cannot find device "nvmf_tgt_br" 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:03.144 Cannot find device "nvmf_tgt_br2" 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:03.144 Cannot find device "nvmf_tgt_br" 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:03.144 Cannot find device "nvmf_tgt_br2" 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:03.144 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:03.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:03.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:03.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:03.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:07:03.403 00:07:03.403 --- 10.0.0.2 ping statistics --- 00:07:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.403 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:03.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:03.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:07:03.403 00:07:03.403 --- 10.0.0.3 ping statistics --- 00:07:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.403 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:03.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:07:03.403 00:07:03.403 --- 10.0.0.1 ping statistics --- 00:07:03.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.403 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:03.403 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67079 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67079 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67079 ']' 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
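As in the earlier run, nvmfappstart launches nvmf_tgt inside the test namespace and then blocks until the RPC socket answers (the waitforlisten call above). A rough stand-in for that sequence, with the binary path and flags taken from the trace and a simple polling loop in place of the harness helper:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the JSON-RPC socket until the target is ready to accept configuration
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done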
00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.662 11:25:40 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:03.662 [2024-07-15 11:25:40.960954] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:03.662 [2024-07-15 11:25:40.961810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.662 [2024-07-15 11:25:41.095390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.921 [2024-07-15 11:25:41.185295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.921 [2024-07-15 11:25:41.185524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.921 [2024-07-15 11:25:41.185748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.921 [2024-07-15 11:25:41.185911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.921 [2024-07-15 11:25:41.186040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.921 [2024-07-15 11:25:41.186329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.921 [2024-07-15 11:25:41.186415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.921 [2024-07-15 11:25:41.187088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.921 [2024-07-15 11:25:41.187115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:03.921 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:04.179 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:04.179 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:04.179 "nvmf_tgt_1" 00:07:04.179 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:04.438 "nvmf_tgt_2" 00:07:04.438 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:04.438 11:25:41 nvmf_tcp.nvmf_multitarget -- 
target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:04.697 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:04.697 11:25:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:04.697 true 00:07:04.697 11:25:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:04.697 true 00:07:04.697 11:25:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:04.697 11:25:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.955 rmmod nvme_tcp 00:07:04.955 rmmod nvme_fabrics 00:07:04.955 rmmod nvme_keyring 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67079 ']' 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67079 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67079 ']' 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67079 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:04.955 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67079 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67079' 00:07:05.213 killing process with pid 67079 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67079 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67079 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:05.213 11:25:42 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:05.213 00:07:05.213 real 0m2.208s 00:07:05.213 user 0m6.847s 00:07:05.213 sys 0m0.590s 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.213 11:25:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:05.213 ************************************ 00:07:05.213 END TEST nvmf_multitarget 00:07:05.213 ************************************ 00:07:05.471 11:25:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:05.471 11:25:42 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:05.471 11:25:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.471 11:25:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.471 11:25:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.471 ************************************ 00:07:05.471 START TEST nvmf_rpc 00:07:05.471 ************************************ 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:05.471 * Looking for test storage... 
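The multitarget checks above drive test/nvmf/target/multitarget_rpc.py, which talks to the same JSON-RPC socket; the traced sequence boils down to creating two extra targets, counting them with jq, and deleting them again. A sketch using the names and sizes from the log:

MTRPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

$MTRPC nvmf_get_targets | jq length            # 1: only the default target exists
$MTRPC nvmf_create_target -n nvmf_tgt_1 -s 32
$MTRPC nvmf_create_target -n nvmf_tgt_2 -s 32
$MTRPC nvmf_get_targets | jq length            # 3 after the two extra targets
$MTRPC nvmf_delete_target -n nvmf_tgt_1
$MTRPC nvmf_delete_target -n nvmf_tgt_2
$MTRPC nvmf_get_targets | jq length            # back to 1 once both are removed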
00:07:05.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:05.471 Cannot find device "nvmf_tgt_br" 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:05.471 Cannot find device "nvmf_tgt_br2" 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:05.471 Cannot find device "nvmf_tgt_br" 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:05.471 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:05.471 Cannot find device "nvmf_tgt_br2" 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:05.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:05.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:05.472 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:05.729 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:05.729 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:05.729 11:25:42 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:05.729 11:25:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:05.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:07:05.729 00:07:05.729 --- 10.0.0.2 ping statistics --- 00:07:05.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.729 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:05.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:05.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:07:05.729 00:07:05.729 --- 10.0.0.3 ping statistics --- 00:07:05.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.729 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:05.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:05.729 00:07:05.729 --- 10.0.0.1 ping statistics --- 00:07:05.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.729 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67291 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67291 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67291 ']' 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.729 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.986 [2024-07-15 11:25:43.206717] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:05.986 [2024-07-15 11:25:43.206812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.986 [2024-07-15 11:25:43.342962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.986 [2024-07-15 11:25:43.408493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.986 [2024-07-15 11:25:43.408566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:05.986 [2024-07-15 11:25:43.408579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.986 [2024-07-15 11:25:43.408587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.986 [2024-07-15 11:25:43.408594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.986 [2024-07-15 11:25:43.408698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.986 [2024-07-15 11:25:43.408743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.986 [2024-07-15 11:25:43.408830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.986 [2024-07-15 11:25:43.408835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:06.243 "poll_groups": [ 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_000", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [] 00:07:06.243 }, 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_001", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [] 00:07:06.243 }, 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_002", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [] 00:07:06.243 }, 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_003", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [] 00:07:06.243 } 00:07:06.243 ], 00:07:06.243 "tick_rate": 2200000000 00:07:06.243 }' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.243 [2024-07-15 11:25:43.662275] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:06.243 "poll_groups": [ 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_000", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [ 00:07:06.243 { 00:07:06.243 "trtype": "TCP" 00:07:06.243 } 00:07:06.243 ] 00:07:06.243 }, 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_001", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [ 00:07:06.243 { 00:07:06.243 "trtype": "TCP" 00:07:06.243 } 00:07:06.243 ] 00:07:06.243 }, 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_002", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [ 00:07:06.243 { 00:07:06.243 "trtype": "TCP" 00:07:06.243 } 00:07:06.243 ] 00:07:06.243 }, 00:07:06.243 { 00:07:06.243 "admin_qpairs": 0, 00:07:06.243 "completed_nvme_io": 0, 00:07:06.243 "current_admin_qpairs": 0, 00:07:06.243 "current_io_qpairs": 0, 00:07:06.243 "io_qpairs": 0, 00:07:06.243 "name": "nvmf_tgt_poll_group_003", 00:07:06.243 "pending_bdev_io": 0, 00:07:06.243 "transports": [ 00:07:06.243 { 00:07:06.243 "trtype": "TCP" 00:07:06.243 } 00:07:06.243 ] 00:07:06.243 } 00:07:06.243 ], 00:07:06.243 "tick_rate": 2200000000 00:07:06.243 }' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:06.243 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.500 Malloc1 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.500 [2024-07-15 11:25:43.838831] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -a 10.0.0.2 -s 4420 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -a 10.0.0.2 -s 4420 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -a 10.0.0.2 -s 4420 00:07:06.500 [2024-07-15 11:25:43.867140] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421' 00:07:06.500 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:06.500 could not add new controller: failed to write to nvme-fabrics device 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.500 11:25:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:06.757 11:25:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:06.758 11:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:06.758 11:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:06.758 11:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:06.758 11:25:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:08.655 11:25:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:08.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.655 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.912 [2024-07-15 11:25:46.158300] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421' 00:07:08.912 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:08.912 could not add new controller: failed to write to nvme-fabrics device 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:08.912 11:25:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:11.443 11:25:48 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.443 [2024-07-15 11:25:48.453335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:11.443 11:25:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:13.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:13.340 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.341 [2024-07-15 11:25:50.764372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.341 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.600 11:25:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.600 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:13.600 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.600 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:13.600 11:25:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:15.501 11:25:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.760 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.761 [2024-07-15 11:25:53.071635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.761 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.017 11:25:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.017 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:16.017 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.017 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:16.017 11:25:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 [2024-07-15 11:25:55.362868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.915 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.916 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.173 11:25:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.173 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:18.173 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.173 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:18.173 11:25:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:20.702 
11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.702 [2024-07-15 11:25:57.657962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.702 11:25:57 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.702 11:25:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 [2024-07-15 11:25:59.965092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:25:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 [2024-07-15 11:26:00.013129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 [2024-07-15 11:26:00.061156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.600 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
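Note on the loop above (rpc.sh lines 99-107): it exercises the subsystem lifecycle RPCs back to back, creating the subsystem, adding a TCP listener, attaching the Malloc1 namespace, opening it to any host, then removing the namespace and deleting the subsystem. The sketch below replays one iteration with SPDK's scripts/rpc.py client and shows the jq/awk reduction that the jsum helper applies to nvmf_get_stats further down. Treating rpc_cmd as a thin wrapper around rpc.py, and the script path itself, are assumptions inferred from the repository layout visible in this log, not something the trace states outright.

#!/usr/bin/env bash
# Approximate replay of one create/teardown iteration from the trace above.
# ASSUMPTION: rpc_cmd forwards to scripts/rpc.py over the default /var/tmp/spdk.sock socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# jsum-style reduction: sum one numeric field across all poll groups of nvmf_get_stats.
$rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'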
00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.858 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.858 [2024-07-15 11:26:00.109200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
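For reference, the connect/verify/disconnect cycle that ran in the earlier iterations of this test (rpc.sh lines 86-94) follows the pattern sketched below: connect the kernel initiator with nvme-cli, poll until a block device carrying the subsystem serial appears, then disconnect and wait for it to disappear. The helper bodies are an approximate reconstruction of the waitforserial/waitforserial_disconnect functions whose probes (lsblk piped into grep) are visible in the trace; the exact retry bounds live in autotest_common.sh and are only mirrored here.

#!/usr/bin/env bash
# Approximate reconstruction of one connect/verify/disconnect cycle from this test.
hostid=891080d4-f96c-4735-b9e2-e3ce9892e421

nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid --hostid=$hostid \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# waitforserial: retry until a block device with the expected serial is visible.
i=0
while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices >= 1 )) && break
done

nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# waitforserial_disconnect: poll until no device with that serial remains
# (retry bound omitted here for brevity).
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done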
00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 [2024-07-15 11:26:00.157261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:22.859 "poll_groups": [ 00:07:22.859 { 00:07:22.859 "admin_qpairs": 2, 00:07:22.859 "completed_nvme_io": 66, 00:07:22.859 "current_admin_qpairs": 0, 00:07:22.859 "current_io_qpairs": 0, 00:07:22.859 "io_qpairs": 16, 00:07:22.859 "name": "nvmf_tgt_poll_group_000", 00:07:22.859 "pending_bdev_io": 0, 00:07:22.859 "transports": [ 00:07:22.859 { 00:07:22.859 "trtype": "TCP" 00:07:22.859 } 00:07:22.859 ] 00:07:22.859 }, 00:07:22.859 { 00:07:22.859 "admin_qpairs": 3, 00:07:22.859 "completed_nvme_io": 68, 00:07:22.859 "current_admin_qpairs": 0, 00:07:22.859 "current_io_qpairs": 0, 00:07:22.859 "io_qpairs": 17, 00:07:22.859 "name": "nvmf_tgt_poll_group_001", 00:07:22.859 "pending_bdev_io": 0, 00:07:22.859 "transports": [ 00:07:22.859 { 00:07:22.859 "trtype": "TCP" 00:07:22.859 } 00:07:22.859 ] 00:07:22.859 }, 00:07:22.859 { 00:07:22.859 "admin_qpairs": 1, 00:07:22.859 
"completed_nvme_io": 119, 00:07:22.859 "current_admin_qpairs": 0, 00:07:22.859 "current_io_qpairs": 0, 00:07:22.859 "io_qpairs": 19, 00:07:22.859 "name": "nvmf_tgt_poll_group_002", 00:07:22.859 "pending_bdev_io": 0, 00:07:22.859 "transports": [ 00:07:22.859 { 00:07:22.859 "trtype": "TCP" 00:07:22.859 } 00:07:22.859 ] 00:07:22.859 }, 00:07:22.859 { 00:07:22.859 "admin_qpairs": 1, 00:07:22.859 "completed_nvme_io": 167, 00:07:22.859 "current_admin_qpairs": 0, 00:07:22.859 "current_io_qpairs": 0, 00:07:22.859 "io_qpairs": 18, 00:07:22.859 "name": "nvmf_tgt_poll_group_003", 00:07:22.859 "pending_bdev_io": 0, 00:07:22.859 "transports": [ 00:07:22.859 { 00:07:22.859 "trtype": "TCP" 00:07:22.859 } 00:07:22.859 ] 00:07:22.859 } 00:07:22.859 ], 00:07:22.859 "tick_rate": 2200000000 00:07:22.859 }' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:22.859 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.117 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.118 rmmod nvme_tcp 00:07:23.118 rmmod nvme_fabrics 00:07:23.118 rmmod nvme_keyring 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67291 ']' 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67291 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67291 ']' 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67291 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67291 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.118 killing process with pid 67291 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67291' 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67291 00:07:23.118 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67291 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:23.376 00:07:23.376 real 0m17.946s 00:07:23.376 user 1m7.198s 00:07:23.376 sys 0m2.582s 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.376 11:26:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.376 ************************************ 00:07:23.376 END TEST nvmf_rpc 00:07:23.376 ************************************ 00:07:23.376 11:26:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:23.376 11:26:00 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:23.376 11:26:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.376 11:26:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.376 11:26:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.376 ************************************ 00:07:23.376 START TEST nvmf_invalid 00:07:23.376 ************************************ 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:23.376 * Looking for test storage... 
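The jsum helper used in the stats check above appears to total one numeric field across all poll groups by piping nvmf_get_stats JSON through jq and summing with awk; in this run .poll_groups[].io_qpairs gives 16+17+19+18 = 70, which is what the (( 70 > 0 )) assertion sees. A rough standalone sketch of that pattern, with the jq filter and awk one-liner copied from the log and the stats fetched live (assumes a running target and jq installed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# sum a per-poll-group counter out of nvmf_get_stats output
jsum() {
    local filter=$1
    "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1}END{print s}'
}

(( $(jsum '.poll_groups[].io_qpairs') > 0 )) || echo "no io qpairs recorded"
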
00:07:23.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.376 
11:26:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.376 11:26:00 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:23.376 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:23.377 Cannot find device "nvmf_tgt_br" 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:23.377 Cannot find device "nvmf_tgt_br2" 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:23.377 Cannot find device "nvmf_tgt_br" 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:23.377 Cannot find device "nvmf_tgt_br2" 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:23.377 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:23.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:23.634 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:23.634 11:26:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:23.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:07:23.634 00:07:23.634 --- 10.0.0.2 ping statistics --- 00:07:23.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.634 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:23.634 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:23.634 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:07:23.634 00:07:23.634 --- 10.0.0.3 ping statistics --- 00:07:23.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.634 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:23.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:07:23.634 00:07:23.634 --- 10.0.0.1 ping statistics --- 00:07:23.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.634 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.634 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67789 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67789 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67789 ']' 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.892 11:26:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.892 [2024-07-15 11:26:01.167891] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
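The "Cannot find device" messages followed by the three ping checks above come from nvmf_veth_init tearing down and rebuilding the virtual test network: a namespace for the target, veth pairs for the initiator and target sides, and a bridge joining them. A pared-down sketch of that topology is below; interface names and addresses are copied from the log, it needs root, and it omits the second target interface, the iptables rules, and all error handling that the real helper performs.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

ping -c 1 10.0.0.2   # initiator side should now reach the target namespace
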
00:07:23.892 [2024-07-15 11:26:01.167988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.892 [2024-07-15 11:26:01.305442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.892 [2024-07-15 11:26:01.366382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.892 [2024-07-15 11:26:01.366441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.892 [2024-07-15 11:26:01.366454] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.892 [2024-07-15 11:26:01.366462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.892 [2024-07-15 11:26:01.366470] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.892 [2024-07-15 11:26:01.366666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.892 [2024-07-15 11:26:01.366728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.892 [2024-07-15 11:26:01.367272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.892 [2024-07-15 11:26:01.367325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:24.823 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1470 00:07:25.080 [2024-07-15 11:26:02.426193] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:25.080 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 11:26:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1470 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:25.080 request: 00:07:25.080 { 00:07:25.080 "method": "nvmf_create_subsystem", 00:07:25.080 "params": { 00:07:25.080 "nqn": "nqn.2016-06.io.spdk:cnode1470", 00:07:25.080 "tgt_name": "foobar" 00:07:25.080 } 00:07:25.080 } 00:07:25.080 Got JSON-RPC error response 00:07:25.080 GoRPCClient: error on JSON-RPC call' 00:07:25.080 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 11:26:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1470 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:25.080 request: 
00:07:25.080 { 00:07:25.080 "method": "nvmf_create_subsystem", 00:07:25.080 "params": { 00:07:25.080 "nqn": "nqn.2016-06.io.spdk:cnode1470", 00:07:25.080 "tgt_name": "foobar" 00:07:25.080 } 00:07:25.080 } 00:07:25.080 Got JSON-RPC error response 00:07:25.080 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:25.080 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:25.080 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21364 00:07:25.337 [2024-07-15 11:26:02.690463] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21364: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:25.337 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 11:26:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21364 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:25.337 request: 00:07:25.337 { 00:07:25.337 "method": "nvmf_create_subsystem", 00:07:25.337 "params": { 00:07:25.337 "nqn": "nqn.2016-06.io.spdk:cnode21364", 00:07:25.337 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:25.337 } 00:07:25.337 } 00:07:25.337 Got JSON-RPC error response 00:07:25.337 GoRPCClient: error on JSON-RPC call' 00:07:25.337 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 11:26:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21364 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:25.337 request: 00:07:25.337 { 00:07:25.337 "method": "nvmf_create_subsystem", 00:07:25.337 "params": { 00:07:25.337 "nqn": "nqn.2016-06.io.spdk:cnode21364", 00:07:25.337 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:25.337 } 00:07:25.337 } 00:07:25.337 Got JSON-RPC error response 00:07:25.337 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:25.337 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:25.337 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8011 00:07:25.595 [2024-07-15 11:26:02.934672] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8011: invalid model number 'SPDK_Controller' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 11:26:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode8011], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:25.595 request: 00:07:25.595 { 00:07:25.595 "method": "nvmf_create_subsystem", 00:07:25.595 "params": { 00:07:25.595 "nqn": "nqn.2016-06.io.spdk:cnode8011", 00:07:25.595 "model_number": "SPDK_Controller\u001f" 00:07:25.595 } 00:07:25.595 } 00:07:25.595 Got JSON-RPC error response 00:07:25.595 GoRPCClient: error on JSON-RPC call' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 11:26:02 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode8011], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:25.595 request: 00:07:25.595 { 00:07:25.595 "method": "nvmf_create_subsystem", 00:07:25.595 "params": { 00:07:25.595 "nqn": "nqn.2016-06.io.spdk:cnode8011", 00:07:25.595 "model_number": "SPDK_Controller\u001f" 00:07:25.595 } 00:07:25.595 } 00:07:25.595 Got JSON-RPC error response 00:07:25.595 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:25.595 11:26:02 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.595 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:25.596 11:26:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:25.596 11:26:03 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.596 11:26:03 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';eKp%/-2[+%H%vCTF820h' 00:07:25.596 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s ';eKp%/-2[+%H%vCTF820h' nqn.2016-06.io.spdk:cnode9561 00:07:25.857 [2024-07-15 11:26:03.259225] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9561: invalid serial number ';eKp%/-2[+%H%vCTF820h' 00:07:25.857 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 11:26:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9561 serial_number:;eKp%/-2[+%H%vCTF820h], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ;eKp%/-2[+%H%vCTF820h 00:07:25.857 request: 00:07:25.857 { 00:07:25.857 "method": "nvmf_create_subsystem", 00:07:25.857 "params": { 00:07:25.857 "nqn": "nqn.2016-06.io.spdk:cnode9561", 00:07:25.857 "serial_number": ";eKp%/-2[+%H%vCTF820h" 00:07:25.857 } 00:07:25.857 } 00:07:25.857 Got JSON-RPC error response 00:07:25.857 GoRPCClient: error on JSON-RPC call' 00:07:25.857 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 11:26:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9561 serial_number:;eKp%/-2[+%H%vCTF820h], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ;eKp%/-2[+%H%vCTF820h 00:07:25.857 request: 00:07:25.857 { 00:07:25.857 "method": "nvmf_create_subsystem", 00:07:25.857 "params": { 00:07:25.857 "nqn": "nqn.2016-06.io.spdk:cnode9561", 00:07:25.857 "serial_number": ";eKp%/-2[+%H%vCTF820h" 00:07:25.857 } 00:07:25.857 } 00:07:25.857 Got JSON-RPC error response 00:07:25.857 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:25.857 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:25.857 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:25.857 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:25.857 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 45 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.858 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x60' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:26.147 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=8 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
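The long run of printf/echo/string+= entries above is gen_random_s building a random serial or model string one character at a time from a table of ASCII code points 32-127, then checking that the result does not begin with '-'. A compact sketch of the same idea follows; the chars table and the printf %x / echo -e pairing mirror the log, while the leading-dash guard is a simplification rather than the script's exact handling.

# build a random string of $1 printable characters
gen_random_s() {
    local length=$1 ll c string=''
    local chars=($(seq 32 127))                      # ASCII code points used by the test
    for (( ll = 0; ll < length; ll++ )); do
        c=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")
        string+=$(echo -e "\x$c")
    done
    [[ ${string:0:1} == - ]] && string="x${string:1}"   # avoid a string that looks like an option
    echo "$string"
}

gen_random_s 21
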
00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ b == \- ]] 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6' 00:07:26.148 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
-d 'bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6' nqn.2016-06.io.spdk:cnode12915 00:07:26.419 [2024-07-15 11:26:03.667609] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12915: invalid model number 'bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6' 00:07:26.420 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 11:26:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6 nqn:nqn.2016-06.io.spdk:cnode12915], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6 00:07:26.420 request: 00:07:26.420 { 00:07:26.420 "method": "nvmf_create_subsystem", 00:07:26.420 "params": { 00:07:26.420 "nqn": "nqn.2016-06.io.spdk:cnode12915", 00:07:26.420 "model_number": "bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r\"pei$6" 00:07:26.420 } 00:07:26.420 } 00:07:26.420 Got JSON-RPC error response 00:07:26.420 GoRPCClient: error on JSON-RPC call' 00:07:26.420 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 11:26:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6 nqn:nqn.2016-06.io.spdk:cnode12915], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r"pei$6 00:07:26.420 request: 00:07:26.420 { 00:07:26.420 "method": "nvmf_create_subsystem", 00:07:26.420 "params": { 00:07:26.420 "nqn": "nqn.2016-06.io.spdk:cnode12915", 00:07:26.420 "model_number": "bLsR126b)-t0O`# :`(qF)={r8MBG&NAv.r\"pei$6" 00:07:26.420 } 00:07:26.420 } 00:07:26.420 Got JSON-RPC error response 00:07:26.420 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:26.420 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:26.678 [2024-07-15 11:26:03.907897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.678 11:26:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:26.935 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:26.935 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:26.935 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:26.935 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:26.935 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:27.193 [2024-07-15 11:26:04.498351] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:27.193 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 11:26:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:27.193 request: 00:07:27.193 { 00:07:27.193 "method": "nvmf_subsystem_remove_listener", 00:07:27.193 "params": { 00:07:27.193 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:27.193 "listen_address": { 00:07:27.193 "trtype": "tcp", 00:07:27.193 "traddr": "", 00:07:27.193 "trsvcid": "4421" 00:07:27.193 } 
00:07:27.193 } 00:07:27.193 } 00:07:27.193 Got JSON-RPC error response 00:07:27.193 GoRPCClient: error on JSON-RPC call' 00:07:27.193 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 11:26:04 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:27.193 request: 00:07:27.193 { 00:07:27.193 "method": "nvmf_subsystem_remove_listener", 00:07:27.193 "params": { 00:07:27.193 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:27.193 "listen_address": { 00:07:27.193 "trtype": "tcp", 00:07:27.193 "traddr": "", 00:07:27.193 "trsvcid": "4421" 00:07:27.193 } 00:07:27.193 } 00:07:27.193 } 00:07:27.193 Got JSON-RPC error response 00:07:27.193 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:27.193 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14925 -i 0 00:07:27.452 [2024-07-15 11:26:04.750514] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14925: invalid cntlid range [0-65519] 00:07:27.452 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 11:26:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14925], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:27.452 request: 00:07:27.452 { 00:07:27.452 "method": "nvmf_create_subsystem", 00:07:27.452 "params": { 00:07:27.452 "nqn": "nqn.2016-06.io.spdk:cnode14925", 00:07:27.452 "min_cntlid": 0 00:07:27.452 } 00:07:27.452 } 00:07:27.452 Got JSON-RPC error response 00:07:27.452 GoRPCClient: error on JSON-RPC call' 00:07:27.452 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 11:26:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14925], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:27.452 request: 00:07:27.452 { 00:07:27.452 "method": "nvmf_create_subsystem", 00:07:27.452 "params": { 00:07:27.452 "nqn": "nqn.2016-06.io.spdk:cnode14925", 00:07:27.452 "min_cntlid": 0 00:07:27.452 } 00:07:27.452 } 00:07:27.452 Got JSON-RPC error response 00:07:27.452 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.452 11:26:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24986 -i 65520 00:07:27.710 [2024-07-15 11:26:04.998805] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24986: invalid cntlid range [65520-65519] 00:07:27.710 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24986], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:27.710 request: 00:07:27.710 { 00:07:27.710 "method": "nvmf_create_subsystem", 00:07:27.710 "params": { 00:07:27.710 "nqn": "nqn.2016-06.io.spdk:cnode24986", 00:07:27.710 "min_cntlid": 65520 00:07:27.710 } 00:07:27.710 } 00:07:27.710 Got JSON-RPC 
error response 00:07:27.710 GoRPCClient: error on JSON-RPC call' 00:07:27.710 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24986], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:27.710 request: 00:07:27.710 { 00:07:27.710 "method": "nvmf_create_subsystem", 00:07:27.710 "params": { 00:07:27.710 "nqn": "nqn.2016-06.io.spdk:cnode24986", 00:07:27.710 "min_cntlid": 65520 00:07:27.710 } 00:07:27.710 } 00:07:27.710 Got JSON-RPC error response 00:07:27.710 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.710 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32199 -I 0 00:07:27.968 [2024-07-15 11:26:05.263044] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32199: invalid cntlid range [1-0] 00:07:27.968 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32199], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:27.968 request: 00:07:27.968 { 00:07:27.968 "method": "nvmf_create_subsystem", 00:07:27.968 "params": { 00:07:27.968 "nqn": "nqn.2016-06.io.spdk:cnode32199", 00:07:27.968 "max_cntlid": 0 00:07:27.968 } 00:07:27.968 } 00:07:27.968 Got JSON-RPC error response 00:07:27.968 GoRPCClient: error on JSON-RPC call' 00:07:27.968 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32199], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:27.968 request: 00:07:27.968 { 00:07:27.968 "method": "nvmf_create_subsystem", 00:07:27.968 "params": { 00:07:27.968 "nqn": "nqn.2016-06.io.spdk:cnode32199", 00:07:27.968 "max_cntlid": 0 00:07:27.968 } 00:07:27.968 } 00:07:27.968 Got JSON-RPC error response 00:07:27.968 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.968 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20356 -I 65520 00:07:28.226 [2024-07-15 11:26:05.555325] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20356: invalid cntlid range [1-65520] 00:07:28.226 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20356], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:28.226 request: 00:07:28.226 { 00:07:28.226 "method": "nvmf_create_subsystem", 00:07:28.226 "params": { 00:07:28.226 "nqn": "nqn.2016-06.io.spdk:cnode20356", 00:07:28.226 "max_cntlid": 65520 00:07:28.226 } 00:07:28.226 } 00:07:28.226 Got JSON-RPC error response 00:07:28.226 GoRPCClient: error on JSON-RPC call' 00:07:28.226 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20356], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:28.226 request: 00:07:28.226 { 00:07:28.226 "method": "nvmf_create_subsystem", 00:07:28.226 "params": { 00:07:28.226 "nqn": "nqn.2016-06.io.spdk:cnode20356", 00:07:28.226 "max_cntlid": 65520 00:07:28.226 } 00:07:28.226 } 00:07:28.226 Got JSON-RPC error response 00:07:28.226 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:28.226 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode69 -i 6 -I 5 00:07:28.792 [2024-07-15 11:26:05.963730] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode69: invalid cntlid range [6-5] 00:07:28.792 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode69], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:28.792 request: 00:07:28.792 { 00:07:28.792 "method": "nvmf_create_subsystem", 00:07:28.792 "params": { 00:07:28.792 "nqn": "nqn.2016-06.io.spdk:cnode69", 00:07:28.792 "min_cntlid": 6, 00:07:28.792 "max_cntlid": 5 00:07:28.792 } 00:07:28.792 } 00:07:28.792 Got JSON-RPC error response 00:07:28.792 GoRPCClient: error on JSON-RPC call' 00:07:28.792 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 11:26:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode69], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:28.792 request: 00:07:28.792 { 00:07:28.792 "method": "nvmf_create_subsystem", 00:07:28.792 "params": { 00:07:28.792 "nqn": "nqn.2016-06.io.spdk:cnode69", 00:07:28.792 "min_cntlid": 6, 00:07:28.792 "max_cntlid": 5 00:07:28.792 } 00:07:28.792 } 00:07:28.792 Got JSON-RPC error response 00:07:28.792 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:28.792 11:26:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:28.792 { 00:07:28.792 "name": "foobar", 00:07:28.792 "method": "nvmf_delete_target", 00:07:28.792 "req_id": 1 00:07:28.792 } 00:07:28.792 Got JSON-RPC error response 00:07:28.792 response: 00:07:28.792 { 00:07:28.792 "code": -32602, 00:07:28.792 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:28.792 }' 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:28.792 { 00:07:28.792 "name": "foobar", 00:07:28.792 "method": "nvmf_delete_target", 00:07:28.792 "req_id": 1 00:07:28.792 } 00:07:28.792 Got JSON-RPC error response 00:07:28.792 response: 00:07:28.792 { 00:07:28.792 "code": -32602, 00:07:28.792 "message": "The specified target doesn't exist, cannot delete it." 
00:07:28.792 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.792 rmmod nvme_tcp 00:07:28.792 rmmod nvme_fabrics 00:07:28.792 rmmod nvme_keyring 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:28.792 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67789 ']' 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67789 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67789 ']' 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67789 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67789 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.793 killing process with pid 67789 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67789' 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67789 00:07:28.793 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67789 00:07:29.050 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:29.051 ************************************ 00:07:29.051 END TEST nvmf_invalid 00:07:29.051 ************************************ 00:07:29.051 00:07:29.051 real 0m5.720s 00:07:29.051 user 0m23.296s 00:07:29.051 sys 0m1.149s 00:07:29.051 
11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.051 11:26:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:29.051 11:26:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.051 11:26:06 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.051 11:26:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.051 11:26:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.051 11:26:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.051 ************************************ 00:07:29.051 START TEST nvmf_abort 00:07:29.051 ************************************ 00:07:29.051 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:29.308 * Looking for test storage... 00:07:29.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.308 11:26:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.309 11:26:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:29.309 Cannot find device "nvmf_tgt_br" 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.309 Cannot find device "nvmf_tgt_br2" 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:29.309 Cannot find device "nvmf_tgt_br" 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:29.309 Cannot find device "nvmf_tgt_br2" 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:29.309 11:26:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:29.309 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:29.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
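Everything from "ip netns add" down to this first ping is nvmf_veth_init building the self-contained test network: a namespace (nvmf_tgt_ns_spdk) owns the target ends of the veth pairs, the host keeps the initiator end (10.0.0.1), and the host-side peers are enslaved to a bridge (nvmf_br). Condensed, with the second target pair (nvmf_tgt_if2 / 10.0.0.3) elided:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair, stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target reachability check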
00:07:29.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:29.567 00:07:29.567 --- 10.0.0.2 ping statistics --- 00:07:29.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.567 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:29.567 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:29.567 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:07:29.567 00:07:29.567 --- 10.0.0.3 ping statistics --- 00:07:29.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.567 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:29.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:07:29.567 00:07:29.567 --- 10.0.0.1 ping statistics --- 00:07:29.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.567 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.567 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68299 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68299 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68299 ']' 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
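With the network up, nvmfappstart launches the target inside the namespace (core mask 0xE, all tracepoint groups enabled) and waitforlisten blocks until the RPC socket answers before any rpc_cmd is issued. A rough stand-alone equivalent of what the helpers do here (the polling loop is an approximation, not the helper's actual code):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Wait for /var/tmp/spdk.sock to accept RPCs before configuring the target.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done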
00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.568 11:26:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:29.568 [2024-07-15 11:26:06.969349] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:29.568 [2024-07-15 11:26:06.969441] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.825 [2024-07-15 11:26:07.107313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.825 [2024-07-15 11:26:07.178098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.825 [2024-07-15 11:26:07.178163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.825 [2024-07-15 11:26:07.178176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.825 [2024-07-15 11:26:07.178186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.825 [2024-07-15 11:26:07.178195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.825 [2024-07-15 11:26:07.178374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.825 [2024-07-15 11:26:07.178795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.825 [2024-07-15 11:26:07.178897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.757 11:26:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.757 11:26:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:30.757 11:26:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.757 11:26:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.757 11:26:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.757 [2024-07-15 11:26:08.030883] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.757 Malloc0 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
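abort.sh then builds a deliberately slow namespace for the workload to abort against: the TCP transport is created, a 64 MB malloc bdev is allocated, and it is wrapped in a delay bdev so submitted IOs linger in a queue instead of completing immediately. rpc_cmd is effectively scripts/rpc.py pointed at the socket above, so the same setup by hand looks like:

  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MB, 4096-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000               # large read/write latencies so IOs stay outstanding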
00:07:30.757 Delay0 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.757 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.758 [2024-07-15 11:26:08.092720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.758 11:26:08 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:31.015 [2024-07-15 11:26:08.266482] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:32.914 Initializing NVMe Controllers 00:07:32.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:32.914 controller IO queue size 128 less than required 00:07:32.914 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:32.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:32.914 Initialization complete. Launching workers. 
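The Delay0 bdev is then exported over TCP and the abort example is pointed at it; with queue depth 128 against a controller IO queue the tool warns is smaller than required, requests back up in the driver and there is always something outstanding to abort (the per-run tallies follow below). The sequence just traced, condensed:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                             # single core, ~1 s run, queue depth 128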
00:07:32.914 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 32827 00:07:32.914 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32890, failed to submit 62 00:07:32.914 success 32831, unsuccess 59, failed 0 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.914 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.914 rmmod nvme_tcp 00:07:32.914 rmmod nvme_fabrics 00:07:32.914 rmmod nvme_keyring 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68299 ']' 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68299 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68299 ']' 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68299 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68299 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:33.173 killing process with pid 68299 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68299' 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68299 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68299 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:33.173 00:07:33.173 real 0m4.164s 00:07:33.173 user 0m12.187s 00:07:33.173 sys 0m0.935s 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.173 11:26:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.173 ************************************ 00:07:33.173 END TEST nvmf_abort 00:07:33.173 ************************************ 00:07:33.431 11:26:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.431 11:26:10 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.431 11:26:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.431 11:26:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.431 11:26:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.431 ************************************ 00:07:33.431 START TEST nvmf_ns_hotplug_stress 00:07:33.431 ************************************ 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:33.431 * Looking for test storage... 00:07:33.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.431 11:26:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.431 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.432 11:26:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:33.432 Cannot find device "nvmf_tgt_br" 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.432 Cannot find device "nvmf_tgt_br2" 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:33.432 Cannot find device "nvmf_tgt_br" 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:33.432 Cannot find device "nvmf_tgt_br2" 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:33.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:33.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:33.432 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:33.698 11:26:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:33.698 11:26:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:33.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:07:33.698 00:07:33.698 --- 10.0.0.2 ping statistics --- 00:07:33.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.698 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:33.698 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:33.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:33.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:07:33.699 00:07:33.699 --- 10.0.0.3 ping statistics --- 00:07:33.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.699 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:33.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:07:33.699 00:07:33.699 --- 10.0.0.1 ping statistics --- 00:07:33.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.699 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68563 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68563 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68563 ']' 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.699 11:26:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.699 [2024-07-15 11:26:11.169658] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:07:33.699 [2024-07-15 11:26:11.169769] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.956 [2024-07-15 11:26:11.305456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.956 [2024-07-15 11:26:11.364723] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
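[editor's note] For reference, the veth/namespace topology that nvmf_veth_init assembles in the trace above can be reproduced by hand with roughly the following sketch (interface, bridge and namespace names are taken verbatim from the log; the initial cleanup of leftover devices and all error handling are omitted):

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # three veth pairs: one for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign the 10.0.0.0/24 addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and tie the bridge-side ends together with nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP (port 4420) in and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity pings: initiator -> both target addresses, target -> initiator
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1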
00:07:33.956 [2024-07-15 11:26:11.364779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.956 [2024-07-15 11:26:11.364791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.956 [2024-07-15 11:26:11.364799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.956 [2024-07-15 11:26:11.364806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.956 [2024-07-15 11:26:11.364902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.956 [2024-07-15 11:26:11.365481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.956 [2024-07-15 11:26:11.365514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.889 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.889 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:34.890 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.890 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.890 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:34.890 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.890 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:34.890 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.147 [2024-07-15 11:26:12.447196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.147 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:35.405 11:26:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.970 [2024-07-15 11:26:13.153190] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.970 11:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.970 11:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:36.537 Malloc0 00:07:36.537 11:26:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.794 Delay0 00:07:36.795 11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.053 11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:37.311 NULL1 00:07:37.311 
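[editor's note] With the namespace up, nvmf_tgt is started inside it and configured over rpc.py. Condensed from the surrounding trace into one runnable sequence (paths and arguments copied from the log; flags are reproduced as invoked, not re-interpreted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # start the target on cores 1-3 (-m 0xE) inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    # transport, subsystem (serial SPDK00000000000001, any host, max 10 namespaces), listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Delay0 is an artificially slow delay bdev on top of a 32 MB malloc bdev;
    # NULL1 is a 1000 MB null bdev whose size the stress loop keeps growing
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1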
11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:37.568 11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:37.568 11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68695 00:07:37.568 11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:37.568 11:26:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.940 Read completed with error (sct=0, sc=11) 00:07:38.940 11:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.940 11:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:38.940 11:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:39.198 true 00:07:39.456 11:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:39.456 11:26:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.022 11:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.280 11:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:40.280 11:26:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:40.538 true 00:07:40.538 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:40.538 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.103 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.103 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:41.103 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:41.362 true 00:07:41.362 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 
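[editor's note] The stress phase itself is a race between a long-running reader and namespace hot-plugging: spdk_nvme_perf runs 30 s of 512-byte random reads at queue depth 128 against 10.0.0.2:4420 (PERF_PID=68695 above), while the script repeatedly detaches namespace 1, re-attaches Delay0 and grows NULL1. The "Read completed with error (sct=0, sc=11)" lines are the expected side effect: reads issued while namespace 1 is detached fail, and sct=0/sc=0x0b corresponds to the NVMe generic status "Invalid Namespace or Format". A condensed sketch of the loop, paraphrasing ns_hotplug_stress.sh lines 40-53 as they appear in the trace (variable names as in the script):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # background reader: 30 s, qd 128, 512 B random reads over NVMe/TCP
    $perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 $PERF_PID 2>/dev/null; do                            # as long as perf runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # ...and bring it back
        ((null_size++))
        $rpc bdev_null_resize NULL1 $null_size                         # grow NULL1 (namespace 2)
    done
    wait $PERF_PID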
00:07:41.362 11:26:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.621 11:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.880 11:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:41.880 11:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:42.138 true 00:07:42.138 11:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:42.138 11:26:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.074 11:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.332 11:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:43.332 11:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:43.591 true 00:07:43.591 11:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:43.591 11:26:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.850 11:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.108 11:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:44.108 11:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:44.367 true 00:07:44.367 11:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:44.367 11:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.625 11:26:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.884 11:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:44.884 11:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:45.143 true 00:07:45.143 11:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:45.143 11:26:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.077 11:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.335 11:26:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:46.335 11:26:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:46.593 true 00:07:46.593 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:46.593 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.851 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.109 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:47.109 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:47.367 true 00:07:47.367 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:47.367 11:26:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.626 11:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.918 11:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:47.918 11:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:48.175 true 00:07:48.175 11:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:48.176 11:26:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.109 11:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.368 11:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:49.368 11:26:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:49.626 true 00:07:49.626 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:49.626 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.884 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.142 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:50.142 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:50.400 true 00:07:50.400 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:50.400 11:26:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.659 11:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.917 11:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:50.917 11:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:51.176 true 00:07:51.176 11:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:51.176 11:26:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.110 11:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.368 11:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:52.368 11:26:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:52.626 true 00:07:52.626 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:52.626 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.885 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.143 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:53.143 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:53.402 true 00:07:53.402 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:53.402 11:26:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.660 11:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.918 11:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:53.918 11:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:54.176 true 00:07:54.176 11:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:54.176 11:26:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.110 11:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:55.368 11:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:55.368 11:26:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:55.625 true 00:07:55.625 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:55.625 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.924 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.179 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:56.179 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:56.435 true 00:07:56.435 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:56.435 11:26:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.695 11:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.257 11:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:57.257 11:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:57.513 true 00:07:57.513 11:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:57.513 11:26:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.077 11:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.335 11:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:58.335 11:26:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:58.901 true 00:07:58.901 11:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:58.901 11:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.162 11:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.420 11:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:59.420 11:26:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:59.678 true 00:07:59.678 11:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:07:59.678 11:26:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.937 11:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.195 11:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:00.195 11:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:00.453 true 00:08:00.453 11:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:00.453 11:26:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.711 11:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.969 11:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:00.969 11:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:01.227 true 00:08:01.227 11:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:01.227 11:26:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.160 11:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.417 11:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:02.417 11:26:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:02.685 true 00:08:02.685 11:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:02.685 11:26:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.089 11:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:04.603 11:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:04.603 11:26:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:04.861 true 00:08:04.861 11:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:04.861 11:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.794 11:26:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.794 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:05.794 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:06.050 true 00:08:06.050 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:06.050 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.308 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.565 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:06.565 11:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:06.823 true 00:08:06.823 11:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:06.823 11:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.756 11:26:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.756 Initializing NVMe Controllers 00:08:07.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.756 Controller IO queue size 128, less than required. 00:08:07.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.756 Controller IO queue size 128, less than required. 00:08:07.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:07.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:07.756 Initialization complete. Launching workers. 
00:08:07.756 ======================================================== 00:08:07.756 Latency(us) 00:08:07.756 Device Information : IOPS MiB/s Average min max 00:08:07.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 658.00 0.32 87140.07 3359.97 1032012.29 00:08:07.756 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8831.65 4.31 14494.53 3652.62 574415.32 00:08:07.756 ======================================================== 00:08:07.756 Total : 9489.66 4.63 19531.69 3359.97 1032012.29 00:08:07.756 00:08:07.756 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:07.756 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:08.014 true 00:08:08.014 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68695 00:08:08.014 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68695) - No such process 00:08:08.014 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68695 00:08:08.014 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.273 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.531 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:08.531 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:08.531 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:08.531 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.531 11:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:08.789 null0 00:08:09.046 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.046 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.046 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:09.305 null1 00:08:09.305 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.305 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.305 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:09.305 null2 00:08:09.563 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.563 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.563 11:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:09.822 null3 00:08:09.822 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.822 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
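[editor's note] As a quick sanity check on the perf summary above, the Total row is just the two namespace rows combined: IOPS and MiB/s add up, and the average latency is the IOPS-weighted mean. The numbers below are copied from the table; the snippet only re-derives the totals and is not part of the test:

    awk 'BEGIN {
        iops1 = 658.00;  lat1 = 87140.07    # NSID 1 (Delay0, repeatedly hot-removed)
        iops2 = 8831.65; lat2 = 14494.53    # NSID 2 (NULL1)
        total = iops1 + iops2
        printf "IOPS   : %.2f\n", total                                # ~9489.66
        printf "MiB/s  : %.2f\n", total * 512 / 1048576                # ~4.63 for 512 B reads
        printf "avg us : %.2f\n", (iops1*lat1 + iops2*lat2) / total    # ~19531.69
    }'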
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.822 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:10.079 null4 00:08:10.079 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.079 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.079 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:10.337 null5 00:08:10.337 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.337 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.337 11:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:10.594 null6 00:08:10.594 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.594 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.594 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:10.852 null7 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
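[editor's note] After perf exits, both namespaces are removed and the test switches to its concurrent phase: eight 100 MB null bdevs (null0-null7, 4096-byte blocks) are created, and eight background workers each attach and detach their own namespace ID ten times in parallel; the PIDs collected in pids[] are the ones waited on a little further down (69711 69712 ... 69724). A sketch of that phase, paraphrasing the add_remove helper and the spawning loop visible in the trace (script lines 14-18 and 58-66):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    add_remove() {                           # attach/detach one namespace ID ten times
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n $nsid $subsys $bdev
            $rpc nvmf_subsystem_remove_ns $subsys $nsid
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create null$i 100 4096    # 100 MB null bdev, 4 KiB blocks
        add_remove $((i + 1)) null$i &           # nsid 1..8 <- null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"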
00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:10.852 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.110 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:11.110 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.110 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.110 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.110 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69711 69712 69714 69716 69718 69720 69722 69724 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.111 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.368 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.368 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.368 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.368 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.368 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.368 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.633 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.633 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.633 11:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.633 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.905 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.162 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.419 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.420 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.677 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.677 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.677 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.677 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.677 11:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.677 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.677 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.677 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.677 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.677 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.934 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.192 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.450 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.450 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.450 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.450 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.450 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.450 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.451 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.451 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.451 11:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.709 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.967 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.223 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
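The interleaved (( ++i )), nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns entries in this stretch of the trace come from the per-namespace stress loops at lines 16-18 of target/ns_hotplug_stress.sh (the @16/@17/@18 tags). A minimal sketch consistent with that xtrace follows; the wrapper function, the namespace-to-bdev mapping and the backgrounding of one loop per namespace are illustrative assumptions, not the script verbatim.

    # Sketch of the add/remove stress loop reconstructed from the xtrace above
    # (script lines @16-@18). Hypothetical wrapper; the real script may differ.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    stress_one_ns() {
      local nsid=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do                              # @16
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18
      done
    }

    # Eight namespaces (null0..null7) are exercised concurrently, which is why
    # the add/remove calls for different namespace IDs interleave in the log.
    for n in {1..8}; do
      stress_one_ns "$n" "null$((n - 1))" &
    done
    wait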
00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.224 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.481 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.738 11:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.738 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.996 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.253 11:26:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.253 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.511 11:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.767 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.024 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.281 11:26:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.281 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.538 11:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.795 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.052 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.052 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.052 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.053 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.309 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.310 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.310 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.566 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.566 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.566 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.566 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.566 11:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.566 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.566 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.822 11:26:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.822 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.823 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.823 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.823 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.080 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.338 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.338 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.338 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.595 11:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.595 rmmod nvme_tcp 00:08:18.595 rmmod nvme_fabrics 00:08:18.595 rmmod nvme_keyring 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68563 ']' 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68563 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@948 -- # '[' -z 68563 ']' 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68563 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68563 00:08:18.595 killing process with pid 68563 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68563' 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68563 00:08:18.595 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68563 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:18.854 00:08:18.854 real 0m45.583s 00:08:18.854 user 3m46.750s 00:08:18.854 sys 0m13.388s 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.854 11:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.854 ************************************ 00:08:18.854 END TEST nvmf_ns_hotplug_stress 00:08:18.854 ************************************ 00:08:18.854 11:26:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.854 11:26:56 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:18.854 11:26:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.854 11:26:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.854 11:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.854 ************************************ 00:08:18.854 START TEST nvmf_connect_stress 00:08:18.854 ************************************ 00:08:18.854 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:19.112 * Looking for test storage... 
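Before the next test starts, nvmftestfini tears the hotplug-stress target down; the trace above shows the individual steps. Gathered into one place, and keeping only commands that actually appear in the log, the teardown is roughly:

    # Teardown traced above (nvmftestfini in nvmf/common.sh); pid 68563 is the
    # nvmf_tgt process that served the hotplug-stress test. Sketch only.
    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 68563 && wait 68563         # killprocess 68563
    ip -4 addr flush nvmf_init_if    # clear the initiator-side test address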
00:08:19.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:19.112 Cannot find device "nvmf_tgt_br" 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:19.112 Cannot find device "nvmf_tgt_br2" 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:19.112 Cannot find device "nvmf_tgt_br" 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:19.112 Cannot find device "nvmf_tgt_br2" 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:19.112 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:19.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:19.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:19.113 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:19.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:08:19.371 00:08:19.371 --- 10.0.0.2 ping statistics --- 00:08:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.371 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:19.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:19.371 00:08:19.371 --- 10.0.0.3 ping statistics --- 00:08:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.371 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:08:19.371 00:08:19.371 --- 10.0.0.1 ping statistics --- 00:08:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.371 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71064 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71064 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71064 ']' 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
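The connect-stress run talks to the target over the virtual topology that nvmf_veth_init rebuilt just above: a network namespace for the target, veth pairs for the initiator and both target interfaces, and a bridge joining the host-side peers, verified by the three pings. Consolidated from the commands in the trace (the individual "ip link set ... up" calls are folded into loops here):

    # Virtual test network as brought up by nvmf_veth_init in the trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target, listener IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # target, second IP

    # Bring the host-side and namespace-side interfaces up.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    # Bridge the host-side peers together.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic to the initiator interface and across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity check, matching the three ping blocks in the log.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1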
00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.371 11:26:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.371 [2024-07-15 11:26:56.846053] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:19.371 [2024-07-15 11:26:56.846197] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.627 [2024-07-15 11:26:56.987872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.627 [2024-07-15 11:26:57.075113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.627 [2024-07-15 11:26:57.075202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.627 [2024-07-15 11:26:57.075222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.627 [2024-07-15 11:26:57.075235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.627 [2024-07-15 11:26:57.075246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.627 [2024-07-15 11:26:57.075377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.627 [2024-07-15 11:26:57.075563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.627 [2024-07-15 11:26:57.076065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.559 [2024-07-15 11:26:57.972387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.559 [2024-07-15 11:26:57.989886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.559 11:26:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.559 NULL1 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71123 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.559 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.560 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.560 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.560 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.560 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.818 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.076 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.076 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:21.076 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.076 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.076 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.334 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.334 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:21.334 11:26:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.334 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.334 11:26:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.592 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:21.592 11:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:21.592 11:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.592 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.592 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.883 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.883 11:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:21.883 11:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:21.883 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.883 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.450 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.450 11:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:22.450 11:26:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.450 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.450 11:26:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.707 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.707 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:22.707 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.707 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.707 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.965 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.965 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:22.965 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:22.965 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.965 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.225 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.225 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:23.225 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.225 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.225 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.792 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.792 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:23.792 11:27:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.792 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.792 11:27:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.050 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.050 11:27:01 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71123 00:08:24.050 11:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.050 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.050 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.309 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.309 11:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:24.309 11:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.309 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.309 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.567 11:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:24.567 11:27:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.567 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 11:27:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.824 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.824 11:27:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:24.824 11:27:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.824 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.824 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.389 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.389 11:27:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:25.389 11:27:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.389 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.389 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.646 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.646 11:27:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:25.646 11:27:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.646 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.646 11:27:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.903 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.903 11:27:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:25.903 11:27:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.903 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.903 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.159 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.159 11:27:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:26.159 11:27:03 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.159 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.159 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.416 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.416 11:27:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:26.416 11:27:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.416 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.416 11:27:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.056 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.621 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.621 11:27:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:27.621 11:27:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.621 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.621 11:27:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.879 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.879 11:27:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:27.879 11:27:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.879 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.879 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.137 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.137 11:27:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:28.137 11:27:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.137 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.137 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.394 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.394 11:27:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:28.394 11:27:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
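Editor's note: the repeated "kill -0 71123" probes above are the harness keeping the target busy with RPC batches while the connect_stress initiator runs in the background. A minimal sketch of that shape follows; STRESS_CMD and RPC_BATCH are placeholders for illustration, not the harness's real connect_stress invocation or rpc.txt machinery.

#!/usr/bin/env bash
STRESS_CMD=${STRESS_CMD:-"sleep 10"}   # stand-in for the connect_stress tool
RPC_BATCH=${RPC_BATCH:-true}           # stand-in for replaying a batch of RPCs

$STRESS_CMD &          # background stress load
perf_pid=$!

# kill -0 only tests that the PID still exists; while the stress process is
# alive, keep hitting the target with another RPC batch.
while kill -0 "$perf_pid" 2>/dev/null; do
    $RPC_BATCH
    sleep 0.5          # pace the sketch; the real harness blocks on rpc_cmd instead
done

wait "$perf_pid"       # propagate the stress tool's exit status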
00:08:28.394 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.394 11:27:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.653 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.653 11:27:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:28.653 11:27:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.653 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.653 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.218 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.218 11:27:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:29.218 11:27:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.218 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.218 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.490 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.490 11:27:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:29.490 11:27:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.490 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.490 11:27:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.747 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.747 11:27:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:29.747 11:27:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.747 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.747 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.006 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.006 11:27:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:30.006 11:27:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.006 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.006 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.265 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.265 11:27:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:30.265 11:27:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.265 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.265 11:27:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.829 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.829 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:30.829 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.829 11:27:08 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.829 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.829 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71123 00:08:31.087 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71123) - No such process 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71123 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.087 rmmod nvme_tcp 00:08:31.087 rmmod nvme_fabrics 00:08:31.087 rmmod nvme_keyring 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71064 ']' 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71064 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71064 ']' 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71064 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71064 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:31.087 killing process with pid 71064 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71064' 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71064 00:08:31.087 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71064 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
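Editor's note: nvmftestfini, as traced here and continued just below, unloads the kernel initiator modules, stops the target process, and removes the namespace plumbing. A hedged approximation follows; remove_spdk_ns is assumed to amount to deleting the netns, while the flush of nvmf_init_if appears verbatim further on in the log.

#!/usr/bin/env bash
nvmfpid=71064   # PID recorded when the target was started

sync                               # settle outstanding I/O before unloading modules
modprobe -v -r nvme-tcp || true    # drops nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics || true

# Stop the target and wait for it to exit (wait only works if it is our child).
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" 2>/dev/null || true
fi

# Assumed equivalent of remove_spdk_ns, then drop the host-side address.
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if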
00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:31.345 00:08:31.345 real 0m12.378s 00:08:31.345 user 0m41.057s 00:08:31.345 sys 0m3.463s 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.345 11:27:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.345 ************************************ 00:08:31.345 END TEST nvmf_connect_stress 00:08:31.345 ************************************ 00:08:31.345 11:27:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:31.345 11:27:08 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:31.345 11:27:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:31.345 11:27:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.345 11:27:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.345 ************************************ 00:08:31.345 START TEST nvmf_fused_ordering 00:08:31.345 ************************************ 00:08:31.345 11:27:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:31.345 * Looking for test storage... 
00:08:31.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.345 11:27:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.345 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.662 11:27:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:31.663 Cannot find device "nvmf_tgt_br" 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.663 Cannot find device "nvmf_tgt_br2" 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:31.663 Cannot find device "nvmf_tgt_br" 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:31.663 Cannot find device "nvmf_tgt_br2" 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:31.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.663 11:27:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:31.663 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:31.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:08:31.921 00:08:31.921 --- 10.0.0.2 ping statistics --- 00:08:31.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.921 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:31.921 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.921 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:31.921 00:08:31.921 --- 10.0.0.3 ping statistics --- 00:08:31.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.921 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:31.921 00:08:31.921 --- 10.0.0.1 ping statistics --- 00:08:31.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.921 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71443 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71443 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71443 ']' 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:31.921 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.922 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.922 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
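Editor's note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten after nvmfappstart launches the target inside the namespace. A rough sketch of that start-and-poll step, using the paths and flags from the trace; the polling loop is illustrative, not the harness's exact waitforlisten implementation.

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Start nvmf_tgt inside the test namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0x2.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."

# Poll until the RPC socket exists and answers a trivial request.
for _ in $(seq 1 100); do
    if [ -S "$RPC_SOCK" ] && "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done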
00:08:31.922 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.922 11:27:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:31.922 [2024-07-15 11:27:09.252769] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:31.922 [2024-07-15 11:27:09.252876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.922 [2024-07-15 11:27:09.392643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.179 [2024-07-15 11:27:09.465563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.179 [2024-07-15 11:27:09.465623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.179 [2024-07-15 11:27:09.465637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.179 [2024-07-15 11:27:09.465648] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.179 [2024-07-15 11:27:09.465656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.179 [2024-07-15 11:27:09.465685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 [2024-07-15 11:27:10.320153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:08:33.115 [2024-07-15 11:27:10.336324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 NULL1 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.115 11:27:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:33.115 [2024-07-15 11:27:10.389339] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
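Editor's note: the rpc_cmd calls traced above, spelled out as direct rpc.py invocations for readers recreating the target state by hand. This assumes the target's RPC socket is the default /var/tmp/spdk.sock; the harness's rpc_cmd wrapper is being paraphrased here, not quoted, and the transport flags are copied verbatim from the trace.

#!/usr/bin/env bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512     # 1000 MB null bdev with 512-byte blocks ("size: 1GB" in the log)
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The initiator-side tool then exercises fused command ordering against the exported namespace:
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'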
00:08:33.115 [2024-07-15 11:27:10.389415] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71493 ]
00:08:33.373 Attached to nqn.2016-06.io.spdk:cnode1
00:08:33.373 Namespace ID: 1 size: 1GB
00:08:33.373 fused_ordering(0) through fused_ordering(957) completed (timestamps 00:08:33.373 through 00:08:35.330)
fused_ordering(958) 00:08:35.330 fused_ordering(959) 00:08:35.330 fused_ordering(960) 00:08:35.330 fused_ordering(961) 00:08:35.330 fused_ordering(962) 00:08:35.330 fused_ordering(963) 00:08:35.330 fused_ordering(964) 00:08:35.330 fused_ordering(965) 00:08:35.330 fused_ordering(966) 00:08:35.330 fused_ordering(967) 00:08:35.330 fused_ordering(968) 00:08:35.330 fused_ordering(969) 00:08:35.330 fused_ordering(970) 00:08:35.330 fused_ordering(971) 00:08:35.330 fused_ordering(972) 00:08:35.330 fused_ordering(973) 00:08:35.330 fused_ordering(974) 00:08:35.330 fused_ordering(975) 00:08:35.330 fused_ordering(976) 00:08:35.330 fused_ordering(977) 00:08:35.330 fused_ordering(978) 00:08:35.330 fused_ordering(979) 00:08:35.330 fused_ordering(980) 00:08:35.330 fused_ordering(981) 00:08:35.330 fused_ordering(982) 00:08:35.330 fused_ordering(983) 00:08:35.330 fused_ordering(984) 00:08:35.330 fused_ordering(985) 00:08:35.330 fused_ordering(986) 00:08:35.330 fused_ordering(987) 00:08:35.330 fused_ordering(988) 00:08:35.330 fused_ordering(989) 00:08:35.330 fused_ordering(990) 00:08:35.330 fused_ordering(991) 00:08:35.330 fused_ordering(992) 00:08:35.330 fused_ordering(993) 00:08:35.330 fused_ordering(994) 00:08:35.330 fused_ordering(995) 00:08:35.330 fused_ordering(996) 00:08:35.330 fused_ordering(997) 00:08:35.330 fused_ordering(998) 00:08:35.330 fused_ordering(999) 00:08:35.330 fused_ordering(1000) 00:08:35.330 fused_ordering(1001) 00:08:35.330 fused_ordering(1002) 00:08:35.330 fused_ordering(1003) 00:08:35.330 fused_ordering(1004) 00:08:35.330 fused_ordering(1005) 00:08:35.330 fused_ordering(1006) 00:08:35.330 fused_ordering(1007) 00:08:35.330 fused_ordering(1008) 00:08:35.330 fused_ordering(1009) 00:08:35.330 fused_ordering(1010) 00:08:35.330 fused_ordering(1011) 00:08:35.330 fused_ordering(1012) 00:08:35.330 fused_ordering(1013) 00:08:35.330 fused_ordering(1014) 00:08:35.330 fused_ordering(1015) 00:08:35.330 fused_ordering(1016) 00:08:35.330 fused_ordering(1017) 00:08:35.330 fused_ordering(1018) 00:08:35.330 fused_ordering(1019) 00:08:35.330 fused_ordering(1020) 00:08:35.330 fused_ordering(1021) 00:08:35.330 fused_ordering(1022) 00:08:35.330 fused_ordering(1023) 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.330 rmmod nvme_tcp 00:08:35.330 rmmod nvme_fabrics 00:08:35.330 rmmod nvme_keyring 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71443 ']' 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71443 00:08:35.330 11:27:12 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71443 ']' 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71443 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71443 00:08:35.330 killing process with pid 71443 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71443' 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71443 00:08:35.330 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71443 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:35.588 00:08:35.588 real 0m4.127s 00:08:35.588 user 0m5.046s 00:08:35.588 sys 0m1.364s 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.588 ************************************ 00:08:35.588 END TEST nvmf_fused_ordering 00:08:35.588 11:27:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.588 ************************************ 00:08:35.588 11:27:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:35.588 11:27:12 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:35.588 11:27:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.588 11:27:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.588 11:27:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.588 ************************************ 00:08:35.588 START TEST nvmf_delete_subsystem 00:08:35.588 ************************************ 00:08:35.588 11:27:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:35.588 * Looking for test storage... 
00:08:35.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.588 11:27:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.588 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.589 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:35.589 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:35.589 Cannot find device "nvmf_tgt_br" 00:08:35.589 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:35.589 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.846 Cannot find device "nvmf_tgt_br2" 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:35.846 Cannot find device "nvmf_tgt_br" 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:35.846 Cannot find device "nvmf_tgt_br2" 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.846 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.104 11:27:13 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:36.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:08:36.104 00:08:36.104 --- 10.0.0.2 ping statistics --- 00:08:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.104 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:36.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:36.104 00:08:36.104 --- 10.0.0.3 ping statistics --- 00:08:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.104 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:36.104 00:08:36.104 --- 10.0.0.1 ping statistics --- 00:08:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.104 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71704 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71704 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71704 ']' 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:36.104 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.105 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.105 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.105 [2024-07-15 11:27:13.454426] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:36.105 [2024-07-15 11:27:13.454526] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.362 [2024-07-15 11:27:13.590582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.362 [2024-07-15 11:27:13.650242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.362 [2024-07-15 11:27:13.650290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.362 [2024-07-15 11:27:13.650300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.362 [2024-07-15 11:27:13.650308] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.362 [2024-07-15 11:27:13.650315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.362 [2024-07-15 11:27:13.650478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.362 [2024-07-15 11:27:13.650487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 [2024-07-15 11:27:13.779197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 [2024-07-15 11:27:13.795619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 NULL1 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 Delay0 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71736 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:36.362 11:27:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:36.620 [2024-07-15 11:27:14.000039] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
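The xtrace above amounts to a short target-configuration sequence: create the TCP transport, create subsystem cnode1 with a listener on 10.0.0.2:4420, back it with a null bdev wrapped in a delay bdev, attach the namespace, start spdk_nvme_perf against it, and then delete the subsystem while I/O is still in flight. The sketch below is illustrative only, not the test script itself (target/delete_subsystem.sh drives the same RPCs through rpc_cmd); it assumes a local SPDK checkout at /home/vagrant/spdk_repo/spdk and that scripts/rpc.py can reach the target's default RPC socket at /var/tmp/spdk.sock.
SPDK_DIR=/home/vagrant/spdk_repo/spdk          # assumed checkout location, matching the paths in this log
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }     # assumes the default /var/tmp/spdk.sock RPC socket
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Start the I/O load, give it time to ramp, then delete the subsystem underneath it.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid"
Because Delay0 holds every I/O for roughly a second, a full queue (-q 128) is still outstanding when the subsystem is deleted, so those commands complete with error; that is what the aborted completions below show.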
00:08:38.547 11:27:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:38.547 11:27:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:38.547 11:27:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:08:38.805 Repeated "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6", as the outstanding perf I/O was aborted by the subsystem deletion
00:08:38.805 [2024-07-15 11:27:16.038202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6568000c00 is same with the state(5) to be set
00:08:38.805 [2024-07-15 11:27:16.038953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd6f0 is same with the state(5) to be set
00:08:38.806 Further Read/Write completions returned with error (sct=0, sc=8)
Write completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Write completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Write completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Read completed with error (sct=0, sc=8) 00:08:38.806 Write completed with error (sct=0, sc=8) 00:08:39.740 [2024-07-15 11:27:17.014254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd510 is same with the state(5) to be set 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 [2024-07-15 11:27:17.035525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f00a80 is same with the state(5) to be set 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with 
error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 [2024-07-15 11:27:17.035745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd8d0 is same with the state(5) to be set 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 [2024-07-15 11:27:17.036696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f656800cfe0 is same with the state(5) to be set 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Read completed with error (sct=0, sc=8) 00:08:39.740 Write completed with error (sct=0, sc=8) 00:08:39.741 Write completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Write completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 Read completed with error (sct=0, sc=8) 00:08:39.741 [2024-07-15 11:27:17.037731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f656800d740 is same with the state(5) to be set 00:08:39.741 Initializing NVMe Controllers 00:08:39.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:39.741 Controller IO queue size 128, less than required. 
00:08:39.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:39.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:39.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:39.741 Initialization complete. Launching workers.
00:08:39.741 ========================================================
00:08:39.741 Latency(us)
00:08:39.741 Device Information : IOPS MiB/s Average min max
00:08:39.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.15 0.08 914416.22 389.96 1013939.77
00:08:39.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.09 0.08 897592.57 1811.64 1013671.65
00:08:39.741 ========================================================
00:08:39.741 Total : 331.25 0.16 905828.10 389.96 1013939.77
00:08:39.741
00:08:39.741 [2024-07-15 11:27:17.038673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edd510 (9): Bad file descriptor
00:08:39.741 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:39.741 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:39.741 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:39.741 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71736
00:08:39.741 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71736
00:08:40.306 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71736) - No such process
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71736
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71736
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71736
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem --
common/autotest_common.sh@10 -- # set +x 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 [2024-07-15 11:27:17.567738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.306 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71787 00:08:40.307 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:40.307 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:40.307 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:40.307 11:27:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.307 [2024-07-15 11:27:17.745861] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
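Past this point the script only needs to notice that the perf job exits on its own; that is what the repeated kill -0 / sleep 0.5 trace below is doing. The shape of that wait loop is roughly the following (a sketch of the logic, not the literal delete_subsystem.sh code):

  # poll the perf job; the test fails if it is still alive after roughly 10 seconds
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf still running"; exit 1; }
      sleep 0.5
  done

Once kill -0 starts reporting "No such process", the loop ends and the harness moves on to wait on the pid and clean up.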
00:08:40.873 11:27:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:40.873 11:27:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:40.873 11:27:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.131 11:27:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.131 11:27:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:41.131 11:27:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:41.697 11:27:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:41.697 11:27:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:41.697 11:27:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.261 11:27:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.261 11:27:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:42.261 11:27:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:42.881 11:27:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:42.881 11:27:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:42.881 11:27:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.139 11:27:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:43.139 11:27:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787 00:08:43.139 11:27:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.397 Initializing NVMe Controllers 00:08:43.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.397 Controller IO queue size 128, less than required. 00:08:43.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.397 Initialization complete. Launching workers. 
00:08:43.397 ========================================================
00:08:43.397 Latency(us)
00:08:43.397 Device Information : IOPS MiB/s Average min max
00:08:43.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003390.99 1000198.33 1011558.12
00:08:43.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005502.97 1000519.28 1013069.20
00:08:43.397 ========================================================
00:08:43.397 Total : 256.00 0.12 1004446.98 1000198.33 1013069.20
00:08:43.397
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71787
00:08:43.655 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71787) - No such process
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71787
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:43.655 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:43.914 rmmod nvme_tcp
00:08:43.914 rmmod nvme_fabrics
00:08:43.914 rmmod nvme_keyring
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71704 ']'
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71704
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71704 ']'
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71704
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71704
00:08:43.914 killing process with pid 71704
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71704'
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71704
00:08:43.914 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71704
00:08:44.173 11:27:21
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:44.173 00:08:44.173 real 0m8.527s 00:08:44.173 user 0m27.006s 00:08:44.173 sys 0m1.499s 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.173 ************************************ 00:08:44.173 END TEST nvmf_delete_subsystem 00:08:44.173 ************************************ 00:08:44.173 11:27:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:44.173 11:27:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.173 11:27:21 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:44.173 11:27:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.173 11:27:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.173 11:27:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.173 ************************************ 00:08:44.173 START TEST nvmf_ns_masking 00:08:44.173 ************************************ 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:44.173 * Looking for test storage... 
00:08:44.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.173 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c10d4237-a6e3-4372-9586-fe733ae3bcc0 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ec2c65d2-c80d-4ec4-931f-278a43b25281 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:44.174 
11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=378fdb81-da33-4a3d-9785-5d7124fbabfb 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.174 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:44.433 Cannot find device "nvmf_tgt_br" 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:08:44.433 11:27:21 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.433 Cannot find device "nvmf_tgt_br2" 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:44.433 Cannot find device "nvmf_tgt_br" 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:44.433 Cannot find device "nvmf_tgt_br2" 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.433 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.692 11:27:21 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:44.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:08:44.692 00:08:44.692 --- 10.0.0.2 ping statistics --- 00:08:44.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.692 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:44.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:44.692 00:08:44.692 --- 10.0.0.3 ping statistics --- 00:08:44.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.692 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:44.692 00:08:44.692 --- 10.0.0.1 ping statistics --- 00:08:44.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.692 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.692 11:27:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72026 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72026 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72026 ']' 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.692 11:27:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:44.692 [2024-07-15 11:27:22.079215] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:44.692 [2024-07-15 11:27:22.079887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.952 [2024-07-15 11:27:22.221917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.952 [2024-07-15 11:27:22.290151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.952 [2024-07-15 11:27:22.290214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:44.952 [2024-07-15 11:27:22.290230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.952 [2024-07-15 11:27:22.290240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.952 [2024-07-15 11:27:22.290248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.952 [2024-07-15 11:27:22.290284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.889 11:27:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:46.145 [2024-07-15 11:27:23.387116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.145 11:27:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:46.145 11:27:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:46.145 11:27:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:46.403 Malloc1 00:08:46.403 11:27:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:46.661 Malloc2 00:08:46.661 11:27:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:46.919 11:27:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:47.178 11:27:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.436 [2024-07-15 11:27:24.876713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.436 11:27:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:08:47.436 11:27:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 378fdb81-da33-4a3d-9785-5d7124fbabfb -a 10.0.0.2 -s 4420 -i 4 00:08:47.694 11:27:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.694 11:27:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:47.694 11:27:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.694 11:27:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:47.694 11:27:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
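At this point the masking target is fully assembled: a TCP transport, two 64 MB malloc bdevs, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc1 attached as NSID 1, a TCP listener on 10.0.0.2:4420, and a host connection made with an explicit host NQN and host ID. Condensed, the traced calls are the following (a sketch reusing the names and the per-run uuidgen value from the trace above; rpc.py here stands for the repo's scripts/rpc.py on the default socket):

  # target side
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: connect with an explicit host NQN/ID so later masking rules can key off this host
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 378fdb81-da33-4a3d-9785-5d7124fbabfb -a 10.0.0.2 -s 4420 -i 4

The "[ 0]:0x1" style lines further down are the visibility checks: nvme list-ns /dev/nvme0 filtered for the expected NSID, followed by nvme id-ns ... -o json | jq -r .nguid to compare the namespace GUID against the all-zero value the test treats as "not visible to this host".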
00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:49.621 [ 0]:0x1 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:49.621 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:49.879 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bdad277716c48849e9e1be4a586441f 00:08:49.879 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bdad277716c48849e9e1be4a586441f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:49.879 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:50.138 [ 0]:0x1 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bdad277716c48849e9e1be4a586441f 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bdad277716c48849e9e1be4a586441f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:50.138 [ 1]:0x2 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:08:50.138 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.397 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.655 11:27:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:50.913 11:27:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:08:50.913 11:27:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 378fdb81-da33-4a3d-9785-5d7124fbabfb -a 10.0.0.2 -s 4420 -i 4 00:08:50.913 11:27:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:50.914 11:27:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:50.914 11:27:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.914 11:27:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:08:50.914 11:27:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:08:50.914 11:27:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:52.817 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:53.081 [ 0]:0x2 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.081 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.376 [ 0]:0x1 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bdad277716c48849e9e1be4a586441f 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bdad277716c48849e9e1be4a586441f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:53.376 [ 1]:0x2 00:08:53.376 11:27:30 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:53.376 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.633 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:53.633 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.633 11:27:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:53.890 [ 0]:0x2 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
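The [ 0]:0x1 / [ 1]:0x2 lines above come from the ns_is_visible helper traced at target/ns_masking.sh@43-45. Reconstructed roughly from this trace (the authoritative version lives in test/nvmf/target/ns_masking.sh, so details may differ; /dev/nvme0 is the $ctrl_id discovered at @26):

  ns_is_visible() {
      # echo matching entries from list-ns (nothing prints when the namespace is hidden)
      nvme list-ns /dev/nvme0 | grep "$1"
      # a namespace masked for this host NQN identifies with an all-zero NGUID
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

The test asserts visibility with ns_is_visible 0x1 and asserts masking with NOT ns_is_visible 0x1, keying on the all-zero NGUID the target reports for namespaces hidden from the connecting host.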
00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.890 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:54.147 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:08:54.147 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 378fdb81-da33-4a3d-9785-5d7124fbabfb -a 10.0.0.2 -s 4420 -i 4 00:08:54.403 11:27:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:54.403 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.403 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.403 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:08:54.403 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:08:54.403 11:27:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:56.300 [ 0]:0x1 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9bdad277716c48849e9e1be4a586441f 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9bdad277716c48849e9e1be4a586441f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:08:56.300 [ 1]:0x2 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.300 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.558 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:56.558 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.558 11:27:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:56.816 [ 0]:0x2 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.816 11:27:34 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:56.816 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:57.073 [2024-07-15 11:27:34.434912] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:08:57.073 2024/07/15 11:27:34 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:08:57.073 request: 00:08:57.073 { 00:08:57.073 "method": "nvmf_ns_remove_host", 00:08:57.073 "params": { 00:08:57.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.073 "nsid": 2, 00:08:57.073 "host": "nqn.2016-06.io.spdk:host1" 00:08:57.073 } 00:08:57.073 } 00:08:57.073 Got JSON-RPC error response 00:08:57.073 GoRPCClient: error on JSON-RPC call 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:57.073 11:27:34 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:57.073 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:57.074 [ 0]:0x2 00:08:57.074 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:57.074 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3fced014173248cca1cd7b4b99d5dc47 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3fced014173248cca1cd7b4b99d5dc47 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72409 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72409 /var/tmp/host.sock 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72409 ']' 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:08:57.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
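From @117 onward the test keeps a second SPDK process alive as the host side, pinned to its own core and answering RPCs on a private socket so it never collides with the target instance. A stripped-down version of that pattern, with the polling loop standing in for waitforlisten and the repo path abbreviated:

  build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
  hostpid=$!
  # wait until the private RPC socket answers before sending real commands
  until scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods &>/dev/null; do sleep 0.5; done
  # every call aimed at this instance carries -s /var/tmp/host.sock, e.g. the attach done at @48 below
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0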
00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.331 11:27:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:57.331 [2024-07-15 11:27:34.680953] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:08:57.331 [2024-07-15 11:27:34.681073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72409 ] 00:08:57.589 [2024-07-15 11:27:34.823414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.589 [2024-07-15 11:27:34.906642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.521 11:27:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.521 11:27:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:58.521 11:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.521 11:27:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:58.778 11:27:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c10d4237-a6e3-4372-9586-fe733ae3bcc0 00:08:58.778 11:27:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:08:58.778 11:27:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C10D4237A6E343729586FE733AE3BCC0 -i 00:08:59.036 11:27:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ec2c65d2-c80d-4ec4-931f-278a43b25281 00:08:59.036 11:27:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:08:59.036 11:27:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EC2C65D2C80D4EC4931F278A43B25281 -i 00:08:59.295 11:27:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:59.553 11:27:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:08:59.812 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:08:59.812 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:00.070 nvme0n1 00:09:00.070 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:00.071 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:00.329 nvme1n2 00:09:00.587 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:00.587 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:00.587 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:00.587 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:00.587 11:27:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:00.847 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:00.847 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:00.847 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:00.847 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:01.106 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c10d4237-a6e3-4372-9586-fe733ae3bcc0 == \c\1\0\d\4\2\3\7\-\a\6\e\3\-\4\3\7\2\-\9\5\8\6\-\f\e\7\3\3\a\e\3\b\c\c\0 ]] 00:09:01.106 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:01.106 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:01.106 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ec2c65d2-c80d-4ec4-931f-278a43b25281 == \e\c\2\c\6\5\d\2\-\c\8\0\d\-\4\e\c\4\-\9\3\1\f\-\2\7\8\a\4\3\b\2\5\2\8\1 ]] 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72409 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72409 ']' 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72409 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72409 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:01.366 killing process with pid 72409 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72409' 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72409 00:09:01.366 11:27:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72409 00:09:01.624 11:27:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:01.883 11:27:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.883 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.883 rmmod nvme_tcp 00:09:02.142 rmmod nvme_fabrics 00:09:02.142 rmmod nvme_keyring 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72026 ']' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72026 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72026 ']' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72026 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72026 00:09:02.142 killing process with pid 72026 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72026' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72026 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72026 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.142 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.402 11:27:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:02.402 00:09:02.402 real 0m18.134s 00:09:02.402 user 0m29.108s 00:09:02.402 sys 0m2.581s 00:09:02.402 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.402 11:27:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:02.402 ************************************ 00:09:02.402 END TEST nvmf_ns_masking 00:09:02.402 ************************************ 00:09:02.402 11:27:39 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:02.402 11:27:39 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:02.402 11:27:39 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:02.402 11:27:39 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:02.402 11:27:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.402 11:27:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.402 11:27:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.402 ************************************ 00:09:02.402 START TEST nvmf_host_management 00:09:02.402 ************************************ 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:02.402 * Looking for test storage... 00:09:02.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.402 11:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:02.403 Cannot find device "nvmf_tgt_br" 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:02.403 Cannot find device "nvmf_tgt_br2" 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:02.403 11:27:39 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:02.403 Cannot find device "nvmf_tgt_br" 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:02.403 Cannot find device "nvmf_tgt_br2" 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:02.403 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:02.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:02.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:02.662 11:27:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:02.662 
11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:02.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:02.662 00:09:02.662 --- 10.0.0.2 ping statistics --- 00:09:02.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.662 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:02.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:09:02.662 00:09:02.662 --- 10.0.0.3 ping statistics --- 00:09:02.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.662 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:02.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:02.662 00:09:02.662 --- 10.0.0.1 ping statistics --- 00:09:02.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.662 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72764 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72764 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72764 ']' 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.662 11:27:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:02.921 [2024-07-15 11:27:40.184130] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:02.921 [2024-07-15 11:27:40.184215] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.921 [2024-07-15 11:27:40.322393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.179 [2024-07-15 11:27:40.409976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.179 [2024-07-15 11:27:40.410068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.179 [2024-07-15 11:27:40.410090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.179 [2024-07-15 11:27:40.410106] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.179 [2024-07-15 11:27:40.410120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
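The target that just started is reachable only through the virtual topology built by nvmf_veth_init above: the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target owns 10.0.0.2 (and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, and the veth pairs are joined by the nvmf_br bridge with an iptables accept rule for port 4420. Condensed from nvmf/common.sh@166-207 above, omitting the second target interface and some link-up steps:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the target itself then runs inside the namespace, as in nvmf/common.sh@480
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E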
00:09:03.179 [2024-07-15 11:27:40.410767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.179 [2024-07-15 11:27:40.410954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.179 [2024-07-15 11:27:40.411070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.179 [2024-07-15 11:27:40.411099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.745 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.745 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:03.745 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.745 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.745 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.003 [2024-07-15 11:27:41.236905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.003 Malloc0 00:09:04.003 [2024-07-15 11:27:41.308514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
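The subsystem that bdevperf is about to reach was created by the batch of RPCs cat'ed through rpc_cmd at host_management.sh@22-30. The file itself is not echoed in this log, but given the Malloc0 bdev, the listener that appears on 10.0.0.2:4420, and the cnode0/host0 names used later, the batch is roughly equivalent to the following (sizes taken from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 at @11-12; the transport itself was already created directly at @18):

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420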
00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72837 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72837 /var/tmp/bdevperf.sock 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72837 ']' 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:04.003 { 00:09:04.003 "params": { 00:09:04.003 "name": "Nvme$subsystem", 00:09:04.003 "trtype": "$TEST_TRANSPORT", 00:09:04.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.003 "adrfam": "ipv4", 00:09:04.003 "trsvcid": "$NVMF_PORT", 00:09:04.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.003 "hdgst": ${hdgst:-false}, 00:09:04.003 "ddgst": ${ddgst:-false} 00:09:04.003 }, 00:09:04.003 "method": "bdev_nvme_attach_controller" 00:09:04.003 } 00:09:04.003 EOF 00:09:04.003 )") 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:04.003 11:27:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:04.003 "params": { 00:09:04.003 "name": "Nvme0", 00:09:04.003 "trtype": "tcp", 00:09:04.003 "traddr": "10.0.0.2", 00:09:04.003 "adrfam": "ipv4", 00:09:04.003 "trsvcid": "4420", 00:09:04.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:04.003 "hdgst": false, 00:09:04.003 "ddgst": false 00:09:04.003 }, 00:09:04.003 "method": "bdev_nvme_attach_controller" 00:09:04.003 }' 00:09:04.003 [2024-07-15 11:27:41.402952] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
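The --json /dev/fd/63 argument above feeds bdevperf the output of gen_nvmf_target_json 0 through process substitution. Written out as a standalone file the same run looks roughly like the following; the outer subsystems/bdev wrapper is added by gen_nvmf_target_json and is not echoed in this excerpt, so its exact shape is inferred from the standard SPDK JSON-config layout, and /tmp/nvme0.json is only an illustrative path:

  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # queue depth, I/O size, workload and duration match the invocation at @72 above
  build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10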
00:09:04.003 [2024-07-15 11:27:41.403055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72837 ] 00:09:04.262 [2024-07-15 11:27:41.534318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.262 [2024-07-15 11:27:41.603694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.520 Running I/O for 10 seconds... 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:04.520 11:27:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.781 11:27:42 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.781 [2024-07-15 11:27:42.202294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202621] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202813] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202891] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.202999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 
00:09:04.781 [2024-07-15 11:27:42.203189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 [2024-07-15 11:27:42.203230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5310 is same with the state(5) to be set 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:04.781 [2024-07-15 11:27:42.208633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.781 [2024-07-15 11:27:42.208673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.781 [2024-07-15 11:27:42.208689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.781 [2024-07-15 11:27:42.208699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.781 [2024-07-15 11:27:42.208709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.781 [2024-07-15 11:27:42.208719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.781 [2024-07-15 11:27:42.208729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:04.781 [2024-07-15 11:27:42.208738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.781 [2024-07-15 11:27:42.208748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e70af0 is same with the state(5) to be set 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.781 11:27:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:04.781 task offset: 81792 on job bdev=Nvme0n1 fails 00:09:04.781 00:09:04.781 Latency(us) 00:09:04.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.781 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:04.782 Job: Nvme0n1 ended in about 0.48 seconds with error 00:09:04.782 Verification LBA range: start 0x0 length 0x400 00:09:04.782 Nvme0n1 : 0.48 1342.19 83.89 134.43 0.00 41738.14 1936.29 42419.67 00:09:04.782 =================================================================================================================== 00:09:04.782 Total : 1342.19 83.89 134.43 0.00 41738.14 1936.29 
42419.67 00:09:04.782 [2024-07-15 11:27:42.226226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e70af0 (9): Bad file descriptor 00:09:04.782 [2024-07-15 11:27:42.226354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 
11:27:42.226588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 
11:27:42.226807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.226988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.226998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 
11:27:42.227019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 11:27:42.227207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.782 [2024-07-15 
11:27:42.227228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.782 [2024-07-15 11:27:42.227239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 
11:27:42.227441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 
11:27:42.227679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:04.783 [2024-07-15 11:27:42.227766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.783 [2024-07-15 11:27:42.227834] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e70820 was disconnected and freed. reset controller. 00:09:04.783 [2024-07-15 11:27:42.228998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:04.783 [2024-07-15 11:27:42.231062] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.783 [2024-07-15 11:27:42.240905] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
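Note: the waitforio helper traced earlier (target/host_management.sh@45 through @64) polls bdevperf's RPC socket until Nvme0n1 has completed at least 100 reads (67, then 515, in this run); only then does the test call nvmf_subsystem_remove_host with I/O still in flight, which is what produces the ABORTED - SQ DELETION completions and the controller reset above. A standalone sketch of that polling loop, reconstructed from the trace rather than copied from host_management.sh:

# Poll bdev_get_iostat until the bdev has served at least 100 reads; give up after 10 tries.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

waitforio() {
    local rpc_sock=$1 bdev=$2
    local i read_io_count ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$("$rpc" -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Usage as in the test above:
waitforio /var/tmp/bdevperf.sock Nvme0n1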
00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72837 00:09:06.159 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72837) - No such process 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:06.159 { 00:09:06.159 "params": { 00:09:06.159 "name": "Nvme$subsystem", 00:09:06.159 "trtype": "$TEST_TRANSPORT", 00:09:06.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.159 "adrfam": "ipv4", 00:09:06.159 "trsvcid": "$NVMF_PORT", 00:09:06.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.159 "hdgst": ${hdgst:-false}, 00:09:06.159 "ddgst": ${ddgst:-false} 00:09:06.159 }, 00:09:06.159 "method": "bdev_nvme_attach_controller" 00:09:06.159 } 00:09:06.159 EOF 00:09:06.159 )") 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:06.159 11:27:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:06.159 "params": { 00:09:06.159 "name": "Nvme0", 00:09:06.159 "trtype": "tcp", 00:09:06.159 "traddr": "10.0.0.2", 00:09:06.159 "adrfam": "ipv4", 00:09:06.159 "trsvcid": "4420", 00:09:06.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:06.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:06.159 "hdgst": false, 00:09:06.159 "ddgst": false 00:09:06.159 }, 00:09:06.159 "method": "bdev_nvme_attach_controller" 00:09:06.159 }' 00:09:06.159 [2024-07-15 11:27:43.291174] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:06.159 [2024-07-15 11:27:43.291287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72884 ] 00:09:06.159 [2024-07-15 11:27:43.426754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.159 [2024-07-15 11:27:43.510603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.416 Running I/O for 1 seconds... 
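Note: the "No such process" message above is expected. bdevperf had already exited after host0 was removed from the subsystem, so the kill -9 on host_management.sh line 91 finds nothing, and the script continues because the failure is deliberately tolerated. A minimal sketch of that pattern, assuming the same trap wiring seen earlier in this run:

# Tolerate a perf process that may already be gone, so cleanup never aborts the test.
perfpid=72837    # pid captured when bdevperf was launched (value taken from this run)

trap 'kill -9 "$perfpid" || true; exit 1' SIGINT SIGTERM EXIT

# ... run the workload, let it finish or fail ...

kill -9 "$perfpid" || true    # prints "No such process" when bdevperf already exited
trap - SIGINT SIGTERM EXIT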
00:09:07.348 00:09:07.348 Latency(us) 00:09:07.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.348 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:07.348 Verification LBA range: start 0x0 length 0x400 00:09:07.348 Nvme0n1 : 1.03 1421.32 88.83 0.00 0.00 44107.99 5302.46 43372.92 00:09:07.348 =================================================================================================================== 00:09:07.348 Total : 1421.32 88.83 0.00 0.00 44107.99 5302.46 43372.92 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.606 rmmod nvme_tcp 00:09:07.606 rmmod nvme_fabrics 00:09:07.606 rmmod nvme_keyring 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72764 ']' 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72764 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72764 ']' 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72764 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72764 00:09:07.606 killing process with pid 72764 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72764' 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72764 00:09:07.606 11:27:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72764 00:09:07.863 [2024-07-15 11:27:45.113346] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:07.863 00:09:07.863 real 0m5.498s 00:09:07.863 user 0m21.402s 00:09:07.863 sys 0m1.136s 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.863 11:27:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.863 ************************************ 00:09:07.863 END TEST nvmf_host_management 00:09:07.863 ************************************ 00:09:07.863 11:27:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:07.863 11:27:45 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:07.863 11:27:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:07.863 11:27:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.863 11:27:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:07.863 ************************************ 00:09:07.863 START TEST nvmf_lvol 00:09:07.864 ************************************ 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:07.864 * Looking for test storage... 
00:09:07.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.864 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:08.122 11:27:45 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:08.122 Cannot find device "nvmf_tgt_br" 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.122 Cannot find device "nvmf_tgt_br2" 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:08.122 Cannot find device "nvmf_tgt_br" 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:08.122 Cannot find device "nvmf_tgt_br2" 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:08.122 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:08.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:08.381 00:09:08.381 --- 10.0.0.2 ping statistics --- 00:09:08.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.381 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:08.381 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:08.381 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:08.381 00:09:08.381 --- 10.0.0.3 ping statistics --- 00:09:08.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.381 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:08.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:08.381 00:09:08.381 --- 10.0.0.1 ping statistics --- 00:09:08.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.381 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73097 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73097 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73097 ']' 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.381 11:27:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.381 [2024-07-15 11:27:45.728977] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:08.381 [2024-07-15 11:27:45.729073] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.639 [2024-07-15 11:27:45.867198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.639 [2024-07-15 11:27:45.925834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.639 [2024-07-15 11:27:45.925892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.639 [2024-07-15 11:27:45.925903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.639 [2024-07-15 11:27:45.925912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.639 [2024-07-15 11:27:45.925919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.639 [2024-07-15 11:27:45.926648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.639 [2024-07-15 11:27:45.926731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.639 [2024-07-15 11:27:45.926737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.264 11:27:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.521 [2024-07-15 11:27:46.995291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.778 11:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.035 11:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:10.035 11:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.291 11:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:10.291 11:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:10.548 11:27:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:10.806 11:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9c7432a4-ea93-4395-9c50-6a913aee18e6 00:09:10.806 11:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9c7432a4-ea93-4395-9c50-6a913aee18e6 lvol 20 00:09:11.064 11:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8c8be851-43f4-4b92-afb7-5b5ab8e0f15e 00:09:11.064 11:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.340 11:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c8be851-43f4-4b92-afb7-5b5ab8e0f15e 00:09:11.597 11:27:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:11.597 [2024-07-15 11:27:49.047910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.597 11:27:49 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:11.854 11:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:11.854 11:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73250 00:09:11.854 11:27:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:13.229 11:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8c8be851-43f4-4b92-afb7-5b5ab8e0f15e MY_SNAPSHOT 00:09:13.229 11:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5289a706-d85f-4a7c-bdfc-0480f7e6c90d 00:09:13.229 11:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8c8be851-43f4-4b92-afb7-5b5ab8e0f15e 30 00:09:13.795 11:27:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5289a706-d85f-4a7c-bdfc-0480f7e6c90d MY_CLONE 00:09:14.053 11:27:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=21c94704-ea27-48e1-b320-c3ed0c314a6f 00:09:14.053 11:27:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 21c94704-ea27-48e1-b320-c3ed0c314a6f 00:09:14.617 11:27:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73250 00:09:22.716 Initializing NVMe Controllers 00:09:22.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:22.716 Controller IO queue size 128, less than required. 00:09:22.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:22.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:22.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:22.716 Initialization complete. Launching workers. 
00:09:22.716 ======================================================== 00:09:22.716 Latency(us) 00:09:22.716 Device Information : IOPS MiB/s Average min max 00:09:22.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10363.10 40.48 12352.21 550.34 54653.10 00:09:22.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10277.10 40.14 12458.58 2954.89 44040.52 00:09:22.716 ======================================================== 00:09:22.716 Total : 20640.20 80.63 12405.17 550.34 54653.10 00:09:22.716 00:09:22.716 11:27:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.716 11:27:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8c8be851-43f4-4b92-afb7-5b5ab8e0f15e 00:09:22.716 11:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c7432a4-ea93-4395-9c50-6a913aee18e6 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.283 rmmod nvme_tcp 00:09:23.283 rmmod nvme_fabrics 00:09:23.283 rmmod nvme_keyring 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73097 ']' 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73097 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73097 ']' 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73097 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73097 00:09:23.283 killing process with pid 73097 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73097' 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73097 00:09:23.283 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73097 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.541 
11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.541 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:23.542 ************************************ 00:09:23.542 END TEST nvmf_lvol 00:09:23.542 ************************************ 00:09:23.542 00:09:23.542 real 0m15.592s 00:09:23.542 user 1m5.664s 00:09:23.542 sys 0m3.793s 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 11:28:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:23.542 11:28:00 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:23.542 11:28:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:23.542 11:28:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.542 11:28:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:23.542 ************************************ 00:09:23.542 START TEST nvmf_lvs_grow 00:09:23.542 ************************************ 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:23.542 * Looking for test storage... 
00:09:23.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:23.542 11:28:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:23.542 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:23.799 Cannot find device "nvmf_tgt_br" 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:23.799 Cannot find device "nvmf_tgt_br2" 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:23.799 Cannot find device "nvmf_tgt_br" 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:23.799 Cannot find device "nvmf_tgt_br2" 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:23.799 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:23.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.799 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:24.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:09:24.057 00:09:24.057 --- 10.0.0.2 ping statistics --- 00:09:24.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.057 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:24.057 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.057 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:09:24.057 00:09:24.057 --- 10.0.0.3 ping statistics --- 00:09:24.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.057 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:24.057 00:09:24.057 --- 10.0.0.1 ping statistics --- 00:09:24.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.057 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73612 00:09:24.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73612 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73612 ']' 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
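For reference, the veth topology that nvmf_veth_init builds in the trace above reduces to the commands below. This is a condensed sketch, not the full test/nvmf/common.sh helper: interface names and the 10.0.0.x addresses are copied from the trace, the prior-state cleanup (the "Cannot find device ..." lines) is omitted, and it needs root on a host where none of these devices exist yet.

ip netns add nvmf_tgt_ns_spdk
# one veth pair for the initiator, two for the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator stays in the default namespace on 10.0.0.1; the target answers on 10.0.0.2/10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers so initiator and target namespaces can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# same reachability checks the test performs before starting nvmf_tgt in the namespace
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1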
00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.057 11:28:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 [2024-07-15 11:28:01.423321] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:24.057 [2024-07-15 11:28:01.423418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.315 [2024-07-15 11:28:01.556182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.315 [2024-07-15 11:28:01.614527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.315 [2024-07-15 11:28:01.614593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.315 [2024-07-15 11:28:01.614605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.315 [2024-07-15 11:28:01.614614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.315 [2024-07-15 11:28:01.614621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.315 [2024-07-15 11:28:01.614649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.250 11:28:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.250 [2024-07-15 11:28:02.652975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.251 ************************************ 00:09:25.251 START TEST lvs_grow_clean 00:09:25.251 ************************************ 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.251 11:28:02 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.251 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.818 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:25.818 11:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:25.818 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:25.818 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:25.818 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.382 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.382 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.382 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4bf652e3-09bd-40a0-aadd-375da32b5554 lvol 150 00:09:26.382 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=73a287c2-ba81-427b-a6b3-034ef790b63c 00:09:26.382 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:26.382 11:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:26.639 [2024-07-15 11:28:04.059515] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:26.639 [2024-07-15 11:28:04.059612] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:26.639 true 00:09:26.639 11:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:26.639 11:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:26.897 11:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:26.897 11:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.155 11:28:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 73a287c2-ba81-427b-a6b3-034ef790b63c 00:09:27.413 11:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:27.670 [2024-07-15 11:28:05.144200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.931 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73774 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73774 /var/tmp/bdevperf.sock 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73774 ']' 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.188 11:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:28.188 [2024-07-15 11:28:05.495860] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
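The lvs_grow_clean run traced above and continued below boils down to a short RPC sequence. A minimal sketch, assuming the target and the TCP transport are already up, and using /tmp/aio_bdev in place of the repo-internal path from the trace; RPC names, sizes and options are copied from the trace, while the variable plumbing is only for readability:

rpc=scripts/rpc.py
truncate -s 200M /tmp/aio_bdev                                   # 200M backing file
$rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                 # 150M lvol on the store
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# grow while bdevperf drives I/O against the exported lvol:
truncate -s 400M /tmp/aio_bdev                                   # grow the backing file
$rpc bdev_aio_rescan aio_bdev                                    # AIO bdev picks up the new size
$rpc bdev_lvol_grow_lvstore -u "$lvs"                            # lvstore claims the new clusters
$rpc bdev_lvol_get_lvstores -u "$lvs"                            # total_data_clusters: 49 -> 99

Growing the backing file alone does nothing for the lvstore; the bdev_aio_rescan plus bdev_lvol_grow_lvstore pair is what makes total_data_clusters jump from 49 to 99 in the bdev_lvol_get_lvstores output later in this log.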
00:09:28.188 [2024-07-15 11:28:05.495950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73774 ] 00:09:28.188 [2024-07-15 11:28:05.632519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.446 [2024-07-15 11:28:05.696600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.380 11:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.380 11:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:29.380 11:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.380 Nvme0n1 00:09:29.380 11:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.640 [ 00:09:29.640 { 00:09:29.640 "aliases": [ 00:09:29.640 "73a287c2-ba81-427b-a6b3-034ef790b63c" 00:09:29.640 ], 00:09:29.640 "assigned_rate_limits": { 00:09:29.640 "r_mbytes_per_sec": 0, 00:09:29.640 "rw_ios_per_sec": 0, 00:09:29.640 "rw_mbytes_per_sec": 0, 00:09:29.640 "w_mbytes_per_sec": 0 00:09:29.640 }, 00:09:29.640 "block_size": 4096, 00:09:29.640 "claimed": false, 00:09:29.640 "driver_specific": { 00:09:29.640 "mp_policy": "active_passive", 00:09:29.640 "nvme": [ 00:09:29.640 { 00:09:29.640 "ctrlr_data": { 00:09:29.640 "ana_reporting": false, 00:09:29.640 "cntlid": 1, 00:09:29.640 "firmware_revision": "24.09", 00:09:29.640 "model_number": "SPDK bdev Controller", 00:09:29.640 "multi_ctrlr": true, 00:09:29.640 "oacs": { 00:09:29.640 "firmware": 0, 00:09:29.640 "format": 0, 00:09:29.640 "ns_manage": 0, 00:09:29.640 "security": 0 00:09:29.640 }, 00:09:29.640 "serial_number": "SPDK0", 00:09:29.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.640 "vendor_id": "0x8086" 00:09:29.640 }, 00:09:29.640 "ns_data": { 00:09:29.640 "can_share": true, 00:09:29.640 "id": 1 00:09:29.640 }, 00:09:29.640 "trid": { 00:09:29.640 "adrfam": "IPv4", 00:09:29.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.640 "traddr": "10.0.0.2", 00:09:29.640 "trsvcid": "4420", 00:09:29.640 "trtype": "TCP" 00:09:29.640 }, 00:09:29.640 "vs": { 00:09:29.640 "nvme_version": "1.3" 00:09:29.640 } 00:09:29.640 } 00:09:29.640 ] 00:09:29.640 }, 00:09:29.640 "memory_domains": [ 00:09:29.640 { 00:09:29.640 "dma_device_id": "system", 00:09:29.640 "dma_device_type": 1 00:09:29.640 } 00:09:29.640 ], 00:09:29.640 "name": "Nvme0n1", 00:09:29.640 "num_blocks": 38912, 00:09:29.640 "product_name": "NVMe disk", 00:09:29.640 "supported_io_types": { 00:09:29.640 "abort": true, 00:09:29.640 "compare": true, 00:09:29.640 "compare_and_write": true, 00:09:29.640 "copy": true, 00:09:29.640 "flush": true, 00:09:29.640 "get_zone_info": false, 00:09:29.640 "nvme_admin": true, 00:09:29.640 "nvme_io": true, 00:09:29.640 "nvme_io_md": false, 00:09:29.640 "nvme_iov_md": false, 00:09:29.640 "read": true, 00:09:29.640 "reset": true, 00:09:29.640 "seek_data": false, 00:09:29.640 "seek_hole": false, 00:09:29.640 "unmap": true, 00:09:29.640 "write": true, 00:09:29.640 "write_zeroes": true, 00:09:29.640 "zcopy": false, 00:09:29.640 
"zone_append": false, 00:09:29.640 "zone_management": false 00:09:29.640 }, 00:09:29.640 "uuid": "73a287c2-ba81-427b-a6b3-034ef790b63c", 00:09:29.640 "zoned": false 00:09:29.640 } 00:09:29.640 ] 00:09:29.640 11:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.640 11:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73821 00:09:29.640 11:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.900 Running I/O for 10 seconds... 00:09:30.836 Latency(us) 00:09:30.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.836 Nvme0n1 : 1.00 8397.00 32.80 0.00 0.00 0.00 0.00 0.00 00:09:30.836 =================================================================================================================== 00:09:30.836 Total : 8397.00 32.80 0.00 0.00 0.00 0.00 0.00 00:09:30.836 00:09:31.772 11:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:31.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.772 Nvme0n1 : 2.00 8198.00 32.02 0.00 0.00 0.00 0.00 0.00 00:09:31.772 =================================================================================================================== 00:09:31.772 Total : 8198.00 32.02 0.00 0.00 0.00 0.00 0.00 00:09:31.772 00:09:32.031 true 00:09:32.031 11:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:32.031 11:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:32.289 11:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.289 11:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.289 11:28:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73821 00:09:32.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.856 Nvme0n1 : 3.00 8247.67 32.22 0.00 0.00 0.00 0.00 0.00 00:09:32.856 =================================================================================================================== 00:09:32.856 Total : 8247.67 32.22 0.00 0.00 0.00 0.00 0.00 00:09:32.856 00:09:33.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.792 Nvme0n1 : 4.00 8248.00 32.22 0.00 0.00 0.00 0.00 0.00 00:09:33.792 =================================================================================================================== 00:09:33.792 Total : 8248.00 32.22 0.00 0.00 0.00 0.00 0.00 00:09:33.792 00:09:34.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.727 Nvme0n1 : 5.00 8244.60 32.21 0.00 0.00 0.00 0.00 0.00 00:09:34.727 =================================================================================================================== 00:09:34.727 Total : 8244.60 32.21 0.00 0.00 0.00 0.00 0.00 00:09:34.727 00:09:36.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.114 
Nvme0n1 : 6.00 8203.33 32.04 0.00 0.00 0.00 0.00 0.00 00:09:36.114 =================================================================================================================== 00:09:36.114 Total : 8203.33 32.04 0.00 0.00 0.00 0.00 0.00 00:09:36.114 00:09:37.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.045 Nvme0n1 : 7.00 8189.86 31.99 0.00 0.00 0.00 0.00 0.00 00:09:37.045 =================================================================================================================== 00:09:37.045 Total : 8189.86 31.99 0.00 0.00 0.00 0.00 0.00 00:09:37.045 00:09:37.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.978 Nvme0n1 : 8.00 8180.62 31.96 0.00 0.00 0.00 0.00 0.00 00:09:37.978 =================================================================================================================== 00:09:37.978 Total : 8180.62 31.96 0.00 0.00 0.00 0.00 0.00 00:09:37.978 00:09:38.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.914 Nvme0n1 : 9.00 8161.56 31.88 0.00 0.00 0.00 0.00 0.00 00:09:38.914 =================================================================================================================== 00:09:38.914 Total : 8161.56 31.88 0.00 0.00 0.00 0.00 0.00 00:09:38.914 00:09:39.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.848 Nvme0n1 : 10.00 8151.40 31.84 0.00 0.00 0.00 0.00 0.00 00:09:39.848 =================================================================================================================== 00:09:39.848 Total : 8151.40 31.84 0.00 0.00 0.00 0.00 0.00 00:09:39.848 00:09:39.848 00:09:39.848 Latency(us) 00:09:39.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.848 Nvme0n1 : 10.01 8156.99 31.86 0.00 0.00 15687.29 6732.33 40036.54 00:09:39.848 =================================================================================================================== 00:09:39.848 Total : 8156.99 31.86 0.00 0.00 15687.29 6732.33 40036.54 00:09:39.848 0 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73774 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73774 ']' 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73774 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73774 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:39.848 killing process with pid 73774 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73774' 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73774 00:09:39.848 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.848 00:09:39.848 Latency(us) 00:09:39.848 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.848 =================================================================================================================== 00:09:39.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.848 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73774 00:09:40.107 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.365 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.622 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.622 11:28:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:40.880 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:40.880 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:40.880 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:41.138 [2024-07-15 11:28:18.416271] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:41.138 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:41.396 2024/07/15 11:28:18 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:4bf652e3-09bd-40a0-aadd-375da32b5554], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:41.396 request: 00:09:41.396 { 00:09:41.396 "method": "bdev_lvol_get_lvstores", 00:09:41.396 "params": { 00:09:41.396 "uuid": "4bf652e3-09bd-40a0-aadd-375da32b5554" 00:09:41.396 } 00:09:41.396 } 00:09:41.396 Got JSON-RPC error response 00:09:41.396 GoRPCClient: error on JSON-RPC call 00:09:41.396 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:41.396 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:41.396 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:41.396 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:41.396 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.653 aio_bdev 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 73a287c2-ba81-427b-a6b3-034ef790b63c 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=73a287c2-ba81-427b-a6b3-034ef790b63c 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.653 11:28:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.911 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73a287c2-ba81-427b-a6b3-034ef790b63c -t 2000 00:09:42.233 [ 00:09:42.233 { 00:09:42.233 "aliases": [ 00:09:42.233 "lvs/lvol" 00:09:42.233 ], 00:09:42.233 "assigned_rate_limits": { 00:09:42.233 "r_mbytes_per_sec": 0, 00:09:42.233 "rw_ios_per_sec": 0, 00:09:42.233 "rw_mbytes_per_sec": 0, 00:09:42.233 "w_mbytes_per_sec": 0 00:09:42.233 }, 00:09:42.233 "block_size": 4096, 00:09:42.233 "claimed": false, 00:09:42.233 "driver_specific": { 00:09:42.233 "lvol": { 00:09:42.233 "base_bdev": "aio_bdev", 00:09:42.233 "clone": false, 00:09:42.233 "esnap_clone": false, 00:09:42.233 "lvol_store_uuid": "4bf652e3-09bd-40a0-aadd-375da32b5554", 00:09:42.233 "num_allocated_clusters": 38, 00:09:42.233 "snapshot": false, 00:09:42.233 "thin_provision": false 00:09:42.233 } 00:09:42.233 }, 00:09:42.233 "name": "73a287c2-ba81-427b-a6b3-034ef790b63c", 00:09:42.233 "num_blocks": 38912, 00:09:42.233 "product_name": "Logical Volume", 00:09:42.233 "supported_io_types": { 00:09:42.233 "abort": false, 00:09:42.233 "compare": false, 00:09:42.233 "compare_and_write": false, 00:09:42.233 "copy": false, 00:09:42.233 "flush": false, 00:09:42.233 "get_zone_info": false, 00:09:42.233 "nvme_admin": false, 00:09:42.233 "nvme_io": false, 00:09:42.233 "nvme_io_md": false, 00:09:42.233 "nvme_iov_md": false, 00:09:42.233 "read": true, 00:09:42.233 "reset": true, 
00:09:42.233 "seek_data": true, 00:09:42.233 "seek_hole": true, 00:09:42.233 "unmap": true, 00:09:42.233 "write": true, 00:09:42.233 "write_zeroes": true, 00:09:42.233 "zcopy": false, 00:09:42.233 "zone_append": false, 00:09:42.233 "zone_management": false 00:09:42.233 }, 00:09:42.233 "uuid": "73a287c2-ba81-427b-a6b3-034ef790b63c", 00:09:42.233 "zoned": false 00:09:42.233 } 00:09:42.233 ] 00:09:42.233 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:42.233 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:42.233 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:42.490 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:42.490 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:42.490 11:28:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:42.747 11:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:42.747 11:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 73a287c2-ba81-427b-a6b3-034ef790b63c 00:09:43.005 11:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bf652e3-09bd-40a0-aadd-375da32b5554 00:09:43.264 11:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.521 11:28:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.779 ************************************ 00:09:43.779 END TEST lvs_grow_clean 00:09:43.779 ************************************ 00:09:43.779 00:09:43.779 real 0m18.486s 00:09:43.779 user 0m17.948s 00:09:43.779 sys 0m2.043s 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:43.779 ************************************ 00:09:43.779 START TEST lvs_grow_dirty 00:09:43.779 ************************************ 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:43.779 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.037 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:44.037 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:44.295 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:09:44.295 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:09:44.295 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:44.553 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:44.553 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:44.553 11:28:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf lvol 150 00:09:44.812 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:09:44.812 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.812 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:45.071 [2024-07-15 11:28:22.514422] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:45.071 [2024-07-15 11:28:22.514503] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:45.071 true 00:09:45.071 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:45.071 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:09:45.636 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:45.636 11:28:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:45.636 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:09:45.905 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:46.166 [2024-07-15 11:28:23.546976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.166 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:46.424 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:46.424 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74220 00:09:46.424 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74220 /var/tmp/bdevperf.sock 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74220 ']' 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:46.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.425 11:28:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:46.425 [2024-07-15 11:28:23.895768] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:09:46.425 [2024-07-15 11:28:23.895870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74220 ] 00:09:46.683 [2024-07-15 11:28:24.030338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.683 [2024-07-15 11:28:24.095375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.941 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:46.941 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:46.941 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:47.200 Nvme0n1 00:09:47.200 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:47.469 [ 00:09:47.469 { 00:09:47.469 "aliases": [ 00:09:47.469 "fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad" 00:09:47.469 ], 00:09:47.469 "assigned_rate_limits": { 00:09:47.469 "r_mbytes_per_sec": 0, 00:09:47.469 "rw_ios_per_sec": 0, 00:09:47.469 "rw_mbytes_per_sec": 0, 00:09:47.469 "w_mbytes_per_sec": 0 00:09:47.469 }, 00:09:47.469 "block_size": 4096, 00:09:47.469 "claimed": false, 00:09:47.469 "driver_specific": { 00:09:47.469 "mp_policy": "active_passive", 00:09:47.469 "nvme": [ 00:09:47.469 { 00:09:47.469 "ctrlr_data": { 00:09:47.469 "ana_reporting": false, 00:09:47.469 "cntlid": 1, 00:09:47.469 "firmware_revision": "24.09", 00:09:47.469 "model_number": "SPDK bdev Controller", 00:09:47.469 "multi_ctrlr": true, 00:09:47.469 "oacs": { 00:09:47.469 "firmware": 0, 00:09:47.469 "format": 0, 00:09:47.469 "ns_manage": 0, 00:09:47.469 "security": 0 00:09:47.469 }, 00:09:47.469 "serial_number": "SPDK0", 00:09:47.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.469 "vendor_id": "0x8086" 00:09:47.469 }, 00:09:47.469 "ns_data": { 00:09:47.469 "can_share": true, 00:09:47.469 "id": 1 00:09:47.469 }, 00:09:47.469 "trid": { 00:09:47.469 "adrfam": "IPv4", 00:09:47.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:47.469 "traddr": "10.0.0.2", 00:09:47.469 "trsvcid": "4420", 00:09:47.469 "trtype": "TCP" 00:09:47.469 }, 00:09:47.469 "vs": { 00:09:47.469 "nvme_version": "1.3" 00:09:47.469 } 00:09:47.469 } 00:09:47.469 ] 00:09:47.469 }, 00:09:47.469 "memory_domains": [ 00:09:47.469 { 00:09:47.469 "dma_device_id": "system", 00:09:47.469 "dma_device_type": 1 00:09:47.469 } 00:09:47.469 ], 00:09:47.469 "name": "Nvme0n1", 00:09:47.469 "num_blocks": 38912, 00:09:47.469 "product_name": "NVMe disk", 00:09:47.469 "supported_io_types": { 00:09:47.469 "abort": true, 00:09:47.469 "compare": true, 00:09:47.469 "compare_and_write": true, 00:09:47.469 "copy": true, 00:09:47.469 "flush": true, 00:09:47.469 "get_zone_info": false, 00:09:47.469 "nvme_admin": true, 00:09:47.469 "nvme_io": true, 00:09:47.469 "nvme_io_md": false, 00:09:47.469 "nvme_iov_md": false, 00:09:47.469 "read": true, 00:09:47.469 "reset": true, 00:09:47.469 "seek_data": false, 00:09:47.469 "seek_hole": false, 00:09:47.469 "unmap": true, 00:09:47.469 "write": true, 00:09:47.469 "write_zeroes": true, 00:09:47.469 "zcopy": false, 00:09:47.469 
"zone_append": false, 00:09:47.469 "zone_management": false 00:09:47.469 }, 00:09:47.469 "uuid": "fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad", 00:09:47.469 "zoned": false 00:09:47.469 } 00:09:47.469 ] 00:09:47.469 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:47.469 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74254 00:09:47.470 11:28:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:47.470 Running I/O for 10 seconds... 00:09:48.416 Latency(us) 00:09:48.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.417 Nvme0n1 : 1.00 8436.00 32.95 0.00 0.00 0.00 0.00 0.00 00:09:48.417 =================================================================================================================== 00:09:48.417 Total : 8436.00 32.95 0.00 0.00 0.00 0.00 0.00 00:09:48.417 00:09:49.353 11:28:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:09:49.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.611 Nvme0n1 : 2.00 8427.50 32.92 0.00 0.00 0.00 0.00 0.00 00:09:49.611 =================================================================================================================== 00:09:49.611 Total : 8427.50 32.92 0.00 0.00 0.00 0.00 0.00 00:09:49.611 00:09:49.611 true 00:09:49.611 11:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:49.611 11:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:09:50.178 11:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:50.178 11:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:50.178 11:28:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74254 00:09:50.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.436 Nvme0n1 : 3.00 8343.00 32.59 0.00 0.00 0.00 0.00 0.00 00:09:50.436 =================================================================================================================== 00:09:50.436 Total : 8343.00 32.59 0.00 0.00 0.00 0.00 0.00 00:09:50.436 00:09:51.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.371 Nvme0n1 : 4.00 8272.50 32.31 0.00 0.00 0.00 0.00 0.00 00:09:51.371 =================================================================================================================== 00:09:51.371 Total : 8272.50 32.31 0.00 0.00 0.00 0.00 0.00 00:09:51.371 00:09:52.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.748 Nvme0n1 : 5.00 8258.20 32.26 0.00 0.00 0.00 0.00 0.00 00:09:52.748 =================================================================================================================== 00:09:52.748 Total : 8258.20 32.26 0.00 0.00 0.00 0.00 0.00 00:09:52.748 00:09:53.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.684 
Nvme0n1 : 6.00 8179.83 31.95 0.00 0.00 0.00 0.00 0.00 00:09:53.684 =================================================================================================================== 00:09:53.684 Total : 8179.83 31.95 0.00 0.00 0.00 0.00 0.00 00:09:53.684 00:09:54.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.620 Nvme0n1 : 7.00 8060.29 31.49 0.00 0.00 0.00 0.00 0.00 00:09:54.620 =================================================================================================================== 00:09:54.620 Total : 8060.29 31.49 0.00 0.00 0.00 0.00 0.00 00:09:54.620 00:09:55.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.553 Nvme0n1 : 8.00 7793.50 30.44 0.00 0.00 0.00 0.00 0.00 00:09:55.553 =================================================================================================================== 00:09:55.553 Total : 7793.50 30.44 0.00 0.00 0.00 0.00 0.00 00:09:55.553 00:09:56.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.487 Nvme0n1 : 9.00 7774.67 30.37 0.00 0.00 0.00 0.00 0.00 00:09:56.487 =================================================================================================================== 00:09:56.487 Total : 7774.67 30.37 0.00 0.00 0.00 0.00 0.00 00:09:56.487 00:09:57.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.422 Nvme0n1 : 10.00 7759.30 30.31 0.00 0.00 0.00 0.00 0.00 00:09:57.422 =================================================================================================================== 00:09:57.422 Total : 7759.30 30.31 0.00 0.00 0.00 0.00 0.00 00:09:57.422 00:09:57.422 00:09:57.422 Latency(us) 00:09:57.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.422 Nvme0n1 : 10.01 7762.24 30.32 0.00 0.00 16484.37 2532.07 346983.33 00:09:57.422 =================================================================================================================== 00:09:57.422 Total : 7762.24 30.32 0.00 0.00 16484.37 2532.07 346983.33 00:09:57.422 0 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74220 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74220 ']' 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74220 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74220 00:09:57.422 killing process with pid 74220 00:09:57.422 Received shutdown signal, test time was about 10.000000 seconds 00:09:57.422 00:09:57.422 Latency(us) 00:09:57.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:57.422 =================================================================================================================== 00:09:57.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74220' 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74220 00:09:57.422 11:28:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74220 00:09:57.679 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.937 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:58.503 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:09:58.503 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:58.503 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:58.503 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:58.503 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73612 00:09:58.503 11:28:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73612 00:09:58.761 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73612 Killed "${NVMF_APP[@]}" "$@" 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74417 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74417 00:09:58.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74417 ']' 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.761 11:28:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.761 [2024-07-15 11:28:36.083494] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:09:58.761 [2024-07-15 11:28:36.083672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.019 [2024-07-15 11:28:36.237171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.019 [2024-07-15 11:28:36.305065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.019 [2024-07-15 11:28:36.305133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.019 [2024-07-15 11:28:36.305154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.019 [2024-07-15 11:28:36.305166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.019 [2024-07-15 11:28:36.305175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.019 [2024-07-15 11:28:36.305211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.585 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.585 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:59.585 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.585 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.585 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:59.843 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.843 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.100 [2024-07-15 11:28:37.367803] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:00.100 [2024-07-15 11:28:37.368189] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:00.100 [2024-07-15 11:28:37.368487] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:00.100 11:28:37 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:00.100 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.358 11:28:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad -t 2000 00:10:00.617 [ 00:10:00.617 { 00:10:00.617 "aliases": [ 00:10:00.617 "lvs/lvol" 00:10:00.617 ], 00:10:00.617 "assigned_rate_limits": { 00:10:00.617 "r_mbytes_per_sec": 0, 00:10:00.617 "rw_ios_per_sec": 0, 00:10:00.617 "rw_mbytes_per_sec": 0, 00:10:00.617 "w_mbytes_per_sec": 0 00:10:00.617 }, 00:10:00.617 "block_size": 4096, 00:10:00.617 "claimed": false, 00:10:00.617 "driver_specific": { 00:10:00.617 "lvol": { 00:10:00.617 "base_bdev": "aio_bdev", 00:10:00.617 "clone": false, 00:10:00.617 "esnap_clone": false, 00:10:00.617 "lvol_store_uuid": "6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf", 00:10:00.617 "num_allocated_clusters": 38, 00:10:00.617 "snapshot": false, 00:10:00.617 "thin_provision": false 00:10:00.617 } 00:10:00.617 }, 00:10:00.617 "name": "fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad", 00:10:00.617 "num_blocks": 38912, 00:10:00.617 "product_name": "Logical Volume", 00:10:00.617 "supported_io_types": { 00:10:00.617 "abort": false, 00:10:00.617 "compare": false, 00:10:00.617 "compare_and_write": false, 00:10:00.617 "copy": false, 00:10:00.617 "flush": false, 00:10:00.617 "get_zone_info": false, 00:10:00.617 "nvme_admin": false, 00:10:00.617 "nvme_io": false, 00:10:00.617 "nvme_io_md": false, 00:10:00.617 "nvme_iov_md": false, 00:10:00.617 "read": true, 00:10:00.617 "reset": true, 00:10:00.617 "seek_data": true, 00:10:00.617 "seek_hole": true, 00:10:00.617 "unmap": true, 00:10:00.617 "write": true, 00:10:00.617 "write_zeroes": true, 00:10:00.617 "zcopy": false, 00:10:00.617 "zone_append": false, 00:10:00.617 "zone_management": false 00:10:00.617 }, 00:10:00.617 "uuid": "fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad", 00:10:00.617 "zoned": false 00:10:00.617 } 00:10:00.617 ] 00:10:00.617 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:00.617 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:00.617 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:00.875 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:00.875 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:00.875 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:01.132 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:01.132 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.391 [2024-07-15 11:28:38.781533] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:01.391 11:28:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:01.650 2024/07/15 11:28:39 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:01.650 request: 00:10:01.650 { 00:10:01.650 "method": "bdev_lvol_get_lvstores", 00:10:01.650 "params": { 00:10:01.650 "uuid": "6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf" 00:10:01.650 } 00:10:01.650 } 00:10:01.650 Got JSON-RPC error response 00:10:01.650 GoRPCClient: error on JSON-RPC call 00:10:01.650 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:01.650 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:01.650 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:01.650 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:01.650 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.909 aio_bdev 00:10:01.909 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:10:01.909 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:10:01.909 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:01.909 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:01.909 11:28:39 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:01.909 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:01.909 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:02.168 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad -t 2000 00:10:02.427 [ 00:10:02.427 { 00:10:02.427 "aliases": [ 00:10:02.427 "lvs/lvol" 00:10:02.427 ], 00:10:02.427 "assigned_rate_limits": { 00:10:02.427 "r_mbytes_per_sec": 0, 00:10:02.427 "rw_ios_per_sec": 0, 00:10:02.427 "rw_mbytes_per_sec": 0, 00:10:02.427 "w_mbytes_per_sec": 0 00:10:02.427 }, 00:10:02.427 "block_size": 4096, 00:10:02.427 "claimed": false, 00:10:02.427 "driver_specific": { 00:10:02.427 "lvol": { 00:10:02.427 "base_bdev": "aio_bdev", 00:10:02.427 "clone": false, 00:10:02.427 "esnap_clone": false, 00:10:02.427 "lvol_store_uuid": "6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf", 00:10:02.427 "num_allocated_clusters": 38, 00:10:02.427 "snapshot": false, 00:10:02.427 "thin_provision": false 00:10:02.427 } 00:10:02.427 }, 00:10:02.427 "name": "fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad", 00:10:02.427 "num_blocks": 38912, 00:10:02.427 "product_name": "Logical Volume", 00:10:02.427 "supported_io_types": { 00:10:02.427 "abort": false, 00:10:02.427 "compare": false, 00:10:02.427 "compare_and_write": false, 00:10:02.427 "copy": false, 00:10:02.427 "flush": false, 00:10:02.427 "get_zone_info": false, 00:10:02.427 "nvme_admin": false, 00:10:02.427 "nvme_io": false, 00:10:02.427 "nvme_io_md": false, 00:10:02.427 "nvme_iov_md": false, 00:10:02.427 "read": true, 00:10:02.427 "reset": true, 00:10:02.427 "seek_data": true, 00:10:02.427 "seek_hole": true, 00:10:02.427 "unmap": true, 00:10:02.427 "write": true, 00:10:02.427 "write_zeroes": true, 00:10:02.427 "zcopy": false, 00:10:02.427 "zone_append": false, 00:10:02.427 "zone_management": false 00:10:02.427 }, 00:10:02.427 "uuid": "fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad", 00:10:02.427 "zoned": false 00:10:02.427 } 00:10:02.427 ] 00:10:02.427 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:02.427 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:02.427 11:28:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:02.685 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:02.685 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:02.685 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:02.944 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:02.944 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fb3f3b27-57b1-4c94-b4e7-57a5d19b09ad 00:10:03.204 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b90b93d-cd8a-43e6-9fe5-4f35e8a750bf 00:10:03.463 11:28:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:03.722 11:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.288 00:10:04.288 real 0m20.315s 00:10:04.288 user 0m42.411s 00:10:04.288 sys 0m7.671s 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:04.288 ************************************ 00:10:04.288 END TEST lvs_grow_dirty 00:10:04.288 ************************************ 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:04.288 nvmf_trace.0 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.288 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.545 rmmod nvme_tcp 00:10:04.545 rmmod nvme_fabrics 00:10:04.545 rmmod nvme_keyring 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74417 ']' 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74417 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74417 ']' 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74417 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:04.545 11:28:41 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74417 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:04.545 killing process with pid 74417 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74417' 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74417 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74417 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.545 11:28:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.545 11:28:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.545 11:28:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.803 11:28:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:04.803 00:10:04.803 real 0m41.152s 00:10:04.803 user 1m6.925s 00:10:04.803 sys 0m10.381s 00:10:04.803 11:28:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.803 11:28:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.803 ************************************ 00:10:04.803 END TEST nvmf_lvs_grow 00:10:04.803 ************************************ 00:10:04.803 11:28:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:04.803 11:28:42 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.803 11:28:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.803 11:28:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.803 11:28:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.803 ************************************ 00:10:04.803 START TEST nvmf_bdev_io_wait 00:10:04.803 ************************************ 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:04.803 * Looking for test storage... 
00:10:04.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:04.803 Cannot find device "nvmf_tgt_br" 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:04.803 Cannot find device "nvmf_tgt_br2" 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:04.803 Cannot find device "nvmf_tgt_br" 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:04.803 Cannot find device "nvmf_tgt_br2" 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:04.803 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.062 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:05.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:05.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:05.062 00:10:05.062 --- 10.0.0.2 ping statistics --- 00:10:05.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.062 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:05.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:05.062 00:10:05.062 --- 10.0.0.3 ping statistics --- 00:10:05.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.062 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:05.062 00:10:05.062 --- 10.0.0.1 ping statistics --- 00:10:05.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.062 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:05.062 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74828 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74828 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74828 ']' 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
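For reference, the nvmf_veth_init sequence traced above reduces to the topology sketch below; interface, namespace and address values are taken from this log, while bring-up ordering and stale-device cleanup done by the real helper in test/nvmf/common.sh are condensed.

    # Sketch of the veth/namespace topology nvmf_veth_init builds (names and
    # addresses as traced above; not the full common.sh logic).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joining the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge is forwarding before the nvmf target is started.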
00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.319 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.319 [2024-07-15 11:28:42.607935] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:05.319 [2024-07-15 11:28:42.608041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.319 [2024-07-15 11:28:42.750111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.645 [2024-07-15 11:28:42.811423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.645 [2024-07-15 11:28:42.811469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.645 [2024-07-15 11:28:42.811480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.645 [2024-07-15 11:28:42.811489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.645 [2024-07-15 11:28:42.811496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.645 [2024-07-15 11:28:42.811637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.645 [2024-07-15 11:28:42.812588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.645 [2024-07-15 11:28:42.812885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.645 [2024-07-15 11:28:42.812900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:42 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 [2024-07-15 11:28:42.964395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 Malloc0 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.645 [2024-07-15 11:28:43.008037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74873 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74875 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:05.645 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:05.645 { 00:10:05.645 "params": { 00:10:05.645 "name": "Nvme$subsystem", 00:10:05.645 "trtype": "$TEST_TRANSPORT", 00:10:05.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.645 "adrfam": "ipv4", 00:10:05.645 "trsvcid": "$NVMF_PORT", 00:10:05.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.645 "hdgst": ${hdgst:-false}, 00:10:05.645 "ddgst": 
${ddgst:-false} 00:10:05.645 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 } 00:10:05.646 EOF 00:10:05.646 )") 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74877 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:05.646 { 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme$subsystem", 00:10:05.646 "trtype": "$TEST_TRANSPORT", 00:10:05.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "$NVMF_PORT", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.646 "hdgst": ${hdgst:-false}, 00:10:05.646 "ddgst": ${ddgst:-false} 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 } 00:10:05.646 EOF 00:10:05.646 )") 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74879 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:05.646 { 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme$subsystem", 00:10:05.646 "trtype": "$TEST_TRANSPORT", 00:10:05.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "$NVMF_PORT", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.646 "hdgst": ${hdgst:-false}, 00:10:05.646 "ddgst": ${ddgst:-false} 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 } 00:10:05.646 EOF 00:10:05.646 )") 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # 
local subsystem config 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:05.646 { 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme$subsystem", 00:10:05.646 "trtype": "$TEST_TRANSPORT", 00:10:05.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "$NVMF_PORT", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.646 "hdgst": ${hdgst:-false}, 00:10:05.646 "ddgst": ${ddgst:-false} 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 } 00:10:05.646 EOF 00:10:05.646 )") 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme1", 00:10:05.646 "trtype": "tcp", 00:10:05.646 "traddr": "10.0.0.2", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "4420", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.646 "hdgst": false, 00:10:05.646 "ddgst": false 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 }' 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme1", 00:10:05.646 "trtype": "tcp", 00:10:05.646 "traddr": "10.0.0.2", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "4420", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.646 "hdgst": false, 00:10:05.646 "ddgst": false 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 }' 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme1", 00:10:05.646 "trtype": "tcp", 00:10:05.646 "traddr": "10.0.0.2", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "4420", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.646 "hdgst": false, 00:10:05.646 "ddgst": false 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 }' 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:05.646 "params": { 00:10:05.646 "name": "Nvme1", 00:10:05.646 "trtype": "tcp", 00:10:05.646 "traddr": "10.0.0.2", 00:10:05.646 "adrfam": "ipv4", 00:10:05.646 "trsvcid": "4420", 00:10:05.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.646 "hdgst": false, 00:10:05.646 "ddgst": false 00:10:05.646 }, 00:10:05.646 "method": "bdev_nvme_attach_controller" 00:10:05.646 }' 00:10:05.646 [2024-07-15 11:28:43.061239] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:05.646 [2024-07-15 11:28:43.061315] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:05.646 11:28:43 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74873 00:10:05.646 [2024-07-15 11:28:43.079292] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:05.646 [2024-07-15 11:28:43.079370] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:05.646 [2024-07-15 11:28:43.091993] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:05.646 [2024-07-15 11:28:43.092122] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:05.903 [2024-07-15 11:28:43.132988] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:10:05.903 [2024-07-15 11:28:43.133081] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:05.903 [2024-07-15 11:28:43.233803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.903 [2024-07-15 11:28:43.276254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.903 [2024-07-15 11:28:43.288476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:05.903 [2024-07-15 11:28:43.312111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.903 [2024-07-15 11:28:43.331697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:05.903 [2024-07-15 11:28:43.351466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.160 [2024-07-15 11:28:43.386297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:06.160 Running I/O for 1 seconds... 00:10:06.160 [2024-07-15 11:28:43.438793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:06.160 Running I/O for 1 seconds... 00:10:06.160 Running I/O for 1 seconds... 00:10:06.160 Running I/O for 1 seconds... 00:10:07.091 00:10:07.091 Latency(us) 00:10:07.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.091 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:07.091 Nvme1n1 : 1.02 6262.01 24.46 0.00 0.00 20160.27 7566.43 34078.72 00:10:07.091 =================================================================================================================== 00:10:07.091 Total : 6262.01 24.46 0.00 0.00 20160.27 7566.43 34078.72 00:10:07.091 00:10:07.091 Latency(us) 00:10:07.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.091 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:07.091 Nvme1n1 : 1.00 184164.41 719.39 0.00 0.00 692.15 283.00 1333.06 00:10:07.091 =================================================================================================================== 00:10:07.091 Total : 184164.41 719.39 0.00 0.00 692.15 283.00 1333.06 00:10:07.091 00:10:07.091 Latency(us) 00:10:07.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.091 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:07.091 Nvme1n1 : 1.01 8105.86 31.66 0.00 0.00 15699.55 8340.95 24784.52 00:10:07.091 =================================================================================================================== 00:10:07.091 Total : 8105.86 31.66 0.00 0.00 15699.55 8340.95 24784.52 00:10:07.350 00:10:07.350 Latency(us) 00:10:07.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.350 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:07.350 Nvme1n1 : 1.00 6662.29 26.02 0.00 0.00 19159.46 4825.83 47185.92 00:10:07.350 =================================================================================================================== 00:10:07.350 Total : 6662.29 26.02 0.00 0.00 19159.46 4825.83 47185.92 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74875 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74877 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74879 00:10:07.350 
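Each of the four bdevperf jobs above (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) receives its controller definition as JSON on /dev/fd/63. A stand-alone rerun of the write job would look roughly like the sketch below; the inner bdev_nvme_attach_controller params are verbatim from the trace, while the surrounding "subsystems" wrapper is the usual SPDK JSON-config layout that gen_nvmf_target_json is assumed to emit.

    # Hypothetical stand-alone equivalent of the traced write job (run from the SPDK repo root).
    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256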
11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.350 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.350 rmmod nvme_tcp 00:10:07.350 rmmod nvme_fabrics 00:10:07.350 rmmod nvme_keyring 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74828 ']' 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74828 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74828 ']' 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74828 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74828 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.608 killing process with pid 74828 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74828' 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74828 00:10:07.608 11:28:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74828 00:10:07.608 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.608 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.608 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.608 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:07.609 00:10:07.609 real 0m2.970s 00:10:07.609 user 0m13.448s 00:10:07.609 sys 0m1.649s 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.609 11:28:45 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.609 ************************************ 00:10:07.609 END TEST nvmf_bdev_io_wait 00:10:07.609 ************************************ 00:10:07.867 11:28:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:07.867 11:28:45 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.867 11:28:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.867 11:28:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.867 11:28:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.867 ************************************ 00:10:07.867 START TEST nvmf_queue_depth 00:10:07.867 ************************************ 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.867 * Looking for test storage... 00:10:07.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:07.867 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:07.868 Cannot find device 
"nvmf_tgt_br" 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.868 Cannot find device "nvmf_tgt_br2" 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:07.868 Cannot find device "nvmf_tgt_br" 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:07.868 Cannot find device "nvmf_tgt_br2" 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:07.868 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:08.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:10:08.126 00:10:08.126 --- 10.0.0.2 ping statistics --- 00:10:08.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.126 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:08.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:08.126 00:10:08.126 --- 10.0.0.3 ping statistics --- 00:10:08.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.126 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:10:08.126 00:10:08.126 --- 10.0.0.1 ping statistics --- 00:10:08.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.126 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75080 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75080 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75080 ']' 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.126 11:28:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:08.384 [2024-07-15 11:28:45.635791] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:08.384 [2024-07-15 11:28:45.635882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.384 [2024-07-15 11:28:45.772127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.384 [2024-07-15 11:28:45.836211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.384 [2024-07-15 11:28:45.836271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.384 [2024-07-15 11:28:45.836286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.384 [2024-07-15 11:28:45.836298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.384 [2024-07-15 11:28:45.836310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.384 [2024-07-15 11:28:45.836341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 [2024-07-15 11:28:46.621986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 Malloc0 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 [2024-07-15 11:28:46.672473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.321 11:28:46 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75130 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75130 /var/tmp/bdevperf.sock 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75130 ']' 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.321 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.321 [2024-07-15 11:28:46.726278] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:09.321 [2024-07-15 11:28:46.726362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75130 ] 00:10:09.580 [2024-07-15 11:28:46.856639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.580 [2024-07-15 11:28:46.915236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.580 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.580 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:09.580 11:28:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:09.580 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.580 11:28:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:09.838 NVMe0n1 00:10:09.838 11:28:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.838 11:28:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:09.838 Running I/O for 10 seconds... 
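The client side of the queue-depth test is a single bdevperf instance started with -z (wait for RPC) on its own socket; the NVMe-oF controller is attached over that socket and the 10-second verify run at queue depth 1024 is then kicked off by bdevperf.py. Replayed by hand from the repo root, the sequence is roughly:

    # Manual replay of the traced queue-depth run (a sketch; all arguments as logged above).
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests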
00:10:19.871 00:10:19.871 Latency(us) 00:10:19.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.871 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:19.871 Verification LBA range: start 0x0 length 0x4000 00:10:19.871 NVMe0n1 : 10.07 8126.11 31.74 0.00 0.00 125476.73 23831.27 117249.86 00:10:19.871 =================================================================================================================== 00:10:19.871 Total : 8126.11 31.74 0.00 0.00 125476.73 23831.27 117249.86 00:10:19.871 0 00:10:19.871 11:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75130 00:10:19.871 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75130 ']' 00:10:19.871 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75130 00:10:19.871 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:19.871 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.871 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75130 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75130' 00:10:20.151 killing process with pid 75130 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75130 00:10:20.151 Received shutdown signal, test time was about 10.000000 seconds 00:10:20.151 00:10:20.151 Latency(us) 00:10:20.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.151 =================================================================================================================== 00:10:20.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75130 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:20.151 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.152 rmmod nvme_tcp 00:10:20.152 rmmod nvme_fabrics 00:10:20.152 rmmod nvme_keyring 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75080 ']' 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75080 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75080 ']' 00:10:20.152 
11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75080 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75080 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:20.152 killing process with pid 75080 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75080' 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75080 00:10:20.152 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75080 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:20.411 00:10:20.411 real 0m12.708s 00:10:20.411 user 0m21.702s 00:10:20.411 sys 0m1.874s 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.411 11:28:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:20.411 ************************************ 00:10:20.411 END TEST nvmf_queue_depth 00:10:20.411 ************************************ 00:10:20.411 11:28:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:20.411 11:28:57 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.411 11:28:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:20.411 11:28:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.411 11:28:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.411 ************************************ 00:10:20.411 START TEST nvmf_target_multipath 00:10:20.411 ************************************ 00:10:20.411 11:28:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:20.670 * Looking for test storage... 
00:10:20.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.670 11:28:57 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.670 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.671 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.671 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:20.671 11:28:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:20.671 Cannot find device "nvmf_tgt_br" 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.671 Cannot find device "nvmf_tgt_br2" 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:20.671 Cannot find device "nvmf_tgt_br" 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:20.671 
11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:20.671 Cannot find device "nvmf_tgt_br2" 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.671 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:20.929 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:20.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:20.930 00:10:20.930 --- 10.0.0.2 ping statistics --- 00:10:20.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.930 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:20.930 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.930 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:20.930 00:10:20.930 --- 10.0.0.3 ping statistics --- 00:10:20.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.930 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:20.930 00:10:20.930 --- 10.0.0.1 ping statistics --- 00:10:20.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.930 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75449 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
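The nvmf_veth_init sequence above wires up the test topology: the initiator stays in the root namespace on 10.0.0.1, while the target lives in the nvmf_tgt_ns_spdk namespace with two veth legs (10.0.0.2 and 10.0.0.3) whose peer ends are bridged together over nvmf_br, so both paths are reachable from the root namespace. Condensed into a sketch (reconstructed from the trace; the individual 'ip link set ... up' and teardown/flush steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg, root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge                              # joins the three *_br peer ends
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # target app runs inside the namespace; the pings above confirm 10.0.0.2/10.0.0.3 are reachable
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF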
00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75449 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75449 ']' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.930 11:28:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:20.930 [2024-07-15 11:28:58.386338] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:20.930 [2024-07-15 11:28:58.386434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.188 [2024-07-15 11:28:58.528196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.188 [2024-07-15 11:28:58.600930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.188 [2024-07-15 11:28:58.601224] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.188 [2024-07-15 11:28:58.601249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.188 [2024-07-15 11:28:58.601259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.188 [2024-07-15 11:28:58.601270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
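Because the target was started with all tracepoint groups enabled (-e 0xFFFF) and shm id 0, the notices above give two ways to get at the trace; a minimal sketch based only on those hints (the spdk_trace binary is assumed to be the built tracing tool under build/bin and its location may differ):

    # live snapshot of events while the nvmf target (shm id 0) is running
    build/bin/spdk_trace -s nvmf -i 0

    # or keep the raw shared-memory trace buffer for offline analysis after the run
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0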
00:10:21.188 [2024-07-15 11:28:58.602521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.188 [2024-07-15 11:28:58.602653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.188 [2024-07-15 11:28:58.603450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.188 [2024-07-15 11:28:58.603439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.123 11:28:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.381 [2024-07-15 11:28:59.694977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.381 11:28:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:22.638 Malloc0 00:10:22.638 11:28:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:22.896 11:29:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.155 11:29:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.413 [2024-07-15 11:29:00.814109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.413 11:29:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:23.671 [2024-07-15 11:29:01.058378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:23.671 11:29:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:23.929 11:29:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:24.187 11:29:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.187 11:29:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.187 11:29:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:24.187 11:29:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:24.187 11:29:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75587 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:26.087 11:29:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:26.087 [global] 00:10:26.087 thread=1 00:10:26.087 invalidate=1 00:10:26.087 rw=randrw 00:10:26.087 time_based=1 00:10:26.087 runtime=6 00:10:26.087 ioengine=libaio 00:10:26.087 direct=1 00:10:26.087 bs=4096 00:10:26.087 iodepth=128 00:10:26.087 norandommap=0 00:10:26.087 numjobs=1 00:10:26.087 00:10:26.087 verify_dump=1 00:10:26.087 verify_backlog=512 00:10:26.087 verify_state_save=0 00:10:26.087 do_verify=1 00:10:26.087 verify=crc32c-intel 00:10:26.087 [job0] 00:10:26.087 filename=/dev/nvme0n1 00:10:26.087 Could not set queue depth (nvme0n1) 00:10:26.345 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:26.345 fio-3.35 00:10:26.345 Starting 1 thread 00:10:27.281 11:29:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:27.539 11:29:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:27.797 11:29:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:28.731 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:28.731 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.731 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:28.731 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:28.989 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:29.248 11:29:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:30.623 11:29:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:30.623 11:29:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.623 11:29:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.623 11:29:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75587 00:10:32.518 00:10:32.518 job0: (groupid=0, jobs=1): err= 0: pid=75608: Mon Jul 15 11:29:09 2024 00:10:32.518 read: IOPS=10.6k, BW=41.6MiB/s (43.6MB/s)(250MiB/6004msec) 00:10:32.518 slat (usec): min=3, max=10005, avg=53.59, stdev=239.37 00:10:32.518 clat (usec): min=755, max=17401, avg=8153.25, stdev=1402.57 00:10:32.518 lat (usec): min=780, max=17419, avg=8206.84, stdev=1413.14 00:10:32.518 clat percentiles (usec): 00:10:32.518 | 1.00th=[ 4752], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7308], 00:10:32.518 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:10:32.518 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10814], 00:10:32.518 | 99.00th=[12387], 99.50th=[13173], 99.90th=[15926], 99.95th=[16188], 00:10:32.518 | 99.99th=[17433] 00:10:32.518 bw ( KiB/s): min= 7744, max=29688, per=53.09%, avg=22610.91, stdev=6249.57, samples=11 00:10:32.518 iops : min= 1936, max= 7422, avg=5652.73, stdev=1562.39, samples=11 00:10:32.518 write: IOPS=6446, BW=25.2MiB/s (26.4MB/s)(134MiB/5321msec); 0 zone resets 00:10:32.518 slat (usec): min=5, max=3336, avg=64.28, stdev=154.84 00:10:32.518 clat (usec): min=546, max=16405, avg=6999.80, stdev=1205.89 00:10:32.518 lat (usec): min=626, max=16444, avg=7064.08, stdev=1210.98 00:10:32.518 clat percentiles (usec): 00:10:32.518 | 1.00th=[ 3720], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 6325], 00:10:32.518 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:10:32.518 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 8225], 95.00th=[ 9110], 00:10:32.518 | 99.00th=[10421], 99.50th=[10945], 99.90th=[13698], 99.95th=[15008], 00:10:32.518 | 99.99th=[16319] 00:10:32.518 bw ( KiB/s): min= 8192, max=29048, per=87.80%, avg=22642.91, stdev=5961.02, samples=11 00:10:32.518 iops : min= 2048, max= 7262, avg=5660.73, stdev=1490.25, samples=11 00:10:32.518 lat (usec) : 750=0.01%, 1000=0.02% 00:10:32.518 lat (msec) : 2=0.08%, 4=0.68%, 10=92.90%, 20=6.31% 00:10:32.518 cpu : usr=5.76%, sys=24.42%, ctx=6492, majf=0, minf=108 00:10:32.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:32.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.518 issued rwts: total=63924,34304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.518 00:10:32.518 Run status group 0 (all jobs): 00:10:32.518 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=250MiB (262MB), run=6004-6004msec 00:10:32.518 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=134MiB (141MB), run=5321-5321msec 00:10:32.518 00:10:32.518 Disk stats (read/write): 00:10:32.518 nvme0n1: ios=63074/33709, merge=0/0, 
ticks=479139/219771, in_queue=698910, util=98.53% 00:10:32.518 11:29:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:32.774 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:33.031 11:29:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75743 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:34.007 11:29:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:34.007 [global] 00:10:34.007 thread=1 00:10:34.007 invalidate=1 00:10:34.007 rw=randrw 00:10:34.007 time_based=1 00:10:34.007 runtime=6 00:10:34.007 ioengine=libaio 00:10:34.007 direct=1 00:10:34.007 bs=4096 00:10:34.007 iodepth=128 00:10:34.007 norandommap=0 00:10:34.007 numjobs=1 00:10:34.007 00:10:34.007 verify_dump=1 00:10:34.007 verify_backlog=512 00:10:34.007 verify_state_save=0 00:10:34.007 do_verify=1 00:10:34.007 verify=crc32c-intel 00:10:34.007 [job0] 00:10:34.007 filename=/dev/nvme0n1 00:10:34.007 Could not set queue depth (nvme0n1) 00:10:34.265 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.265 fio-3.35 00:10:34.265 Starting 1 thread 00:10:35.198 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:35.457 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:35.716 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:35.716 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:35.716 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.716 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:35.716 11:29:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:35.716 11:29:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:36.649 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:36.649 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:36.649 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:36.649 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:37.215 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:37.473 11:29:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:38.408 11:29:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:38.408 11:29:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:38.408 11:29:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:38.408 11:29:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75743 00:10:40.308 00:10:40.308 job0: (groupid=0, jobs=1): err= 0: pid=75764: Mon Jul 15 11:29:17 2024 00:10:40.308 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(279MiB/6006msec) 00:10:40.308 slat (usec): min=4, max=6606, avg=41.86, stdev=200.65 00:10:40.308 clat (usec): min=296, max=20613, avg=7378.03, stdev=1978.42 00:10:40.308 lat (usec): min=316, max=20629, avg=7419.89, stdev=1994.97 00:10:40.308 clat percentiles (usec): 00:10:40.308 | 1.00th=[ 1663], 5.00th=[ 3851], 10.00th=[ 4752], 20.00th=[ 5866], 00:10:40.308 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7767], 00:10:40.308 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10290], 00:10:40.308 | 99.00th=[12518], 99.50th=[13173], 99.90th=[15008], 99.95th=[16712], 00:10:40.308 | 99.99th=[19268] 00:10:40.308 bw ( KiB/s): min= 4088, max=43760, per=54.73%, avg=26000.00, stdev=11154.29, samples=12 00:10:40.308 iops : min= 1022, max=10940, avg=6500.00, stdev=2788.57, samples=12 00:10:40.308 write: IOPS=7586, BW=29.6MiB/s (31.1MB/s)(153MiB/5148msec); 0 zone resets 00:10:40.308 slat (usec): min=13, max=4290, avg=54.02, stdev=132.39 00:10:40.308 clat (usec): min=311, max=19136, avg=6044.65, stdev=1955.63 00:10:40.308 lat (usec): min=346, max=19165, avg=6098.68, stdev=1968.80 00:10:40.308 clat percentiles (usec): 00:10:40.308 | 1.00th=[ 1123], 5.00th=[ 2737], 10.00th=[ 3425], 20.00th=[ 4178], 00:10:40.308 | 30.00th=[ 4883], 40.00th=[ 5932], 50.00th=[ 6521], 60.00th=[ 6915], 00:10:40.308 | 70.00th=[ 7177], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 8717], 00:10:40.308 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12518], 99.95th=[12780], 00:10:40.308 | 99.99th=[17171] 00:10:40.308 bw ( KiB/s): min= 4464, max=43768, per=85.66%, avg=25996.00, stdev=10927.82, samples=12 00:10:40.308 iops : min= 1116, max=10942, avg=6499.00, stdev=2731.95, samples=12 00:10:40.308 lat (usec) : 500=0.05%, 750=0.15%, 1000=0.28% 00:10:40.308 lat (msec) : 2=1.28%, 4=7.99%, 10=85.52%, 20=4.73%, 50=0.01% 00:10:40.308 cpu : usr=6.28%, sys=26.53%, ctx=8510, majf=0, minf=177 00:10:40.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:10:40.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.308 issued rwts: total=71326,39056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.308 00:10:40.308 Run status group 0 (all jobs): 00:10:40.308 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=279MiB (292MB), run=6006-6006msec 00:10:40.308 WRITE: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=153MiB (160MB), run=5148-5148msec 00:10:40.308 00:10:40.308 Disk stats (read/write): 00:10:40.308 nvme0n1: ios=70416/38428, merge=0/0, ticks=479472/208655, in_queue=688127, util=98.67% 00:10:40.308 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:40.308 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.308 11:29:17 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:40.308 11:29:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:40.308 11:29:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.308 11:29:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:40.308 11:29:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.309 11:29:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:40.309 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.567 11:29:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.567 rmmod nvme_tcp 00:10:40.567 rmmod nvme_fabrics 00:10:40.567 rmmod nvme_keyring 00:10:40.567 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75449 ']' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75449 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75449 ']' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75449 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75449 00:10:40.826 killing process with pid 75449 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75449' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75449 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75449 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:40.826 00:10:40.826 real 0m20.435s 00:10:40.826 user 1m20.547s 00:10:40.826 sys 0m6.841s 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.826 11:29:18 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:40.826 ************************************ 00:10:40.826 END TEST nvmf_target_multipath 00:10:40.826 ************************************ 00:10:41.085 11:29:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:41.085 11:29:18 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:41.085 11:29:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:41.085 11:29:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.085 11:29:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:41.085 ************************************ 00:10:41.085 START TEST nvmf_zcopy 00:10:41.085 ************************************ 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:41.085 * Looking for test storage... 
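Before the zcopy run starts, the multipath teardown traced just above boils down to roughly the following (a sketch reusing the pid and paths from this run; killprocess and nvmftestfini do more bookkeeping than shown):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state
  sync
  modprobe -v -r nvme-tcp        # the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill 75449 && wait 75449       # killprocess: stop the nvmf_tgt (pid 75449) started for the multipath test
  ip -4 addr flush nvmf_init_if  # nvmf_tcp_fini: flush the initiator-side address after the _remove_spdk_ns cleanup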
00:10:41.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.085 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:41.086 Cannot find device "nvmf_tgt_br" 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.086 Cannot find device "nvmf_tgt_br2" 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:41.086 Cannot find device "nvmf_tgt_br" 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:41.086 Cannot find device "nvmf_tgt_br2" 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:41.086 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:41.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:10:41.344 00:10:41.344 --- 10.0.0.2 ping statistics --- 00:10:41.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.344 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:41.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:41.344 00:10:41.344 --- 10.0.0.3 ping statistics --- 00:10:41.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.344 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:41.344 00:10:41.344 --- 10.0.0.1 ping statistics --- 00:10:41.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.344 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76042 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76042 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76042 ']' 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.344 11:29:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.603 [2024-07-15 11:29:18.843021] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:41.603 [2024-07-15 11:29:18.843124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.603 [2024-07-15 11:29:18.975176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.603 [2024-07-15 11:29:19.033743] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.603 [2024-07-15 11:29:19.033795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:41.603 [2024-07-15 11:29:19.033807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.603 [2024-07-15 11:29:19.033816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.603 [2024-07-15 11:29:19.033823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.603 [2024-07-15 11:29:19.033848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 [2024-07-15 11:29:19.168073] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 [2024-07-15 11:29:19.188162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 malloc0 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.861 
11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:41.861 { 00:10:41.861 "params": { 00:10:41.861 "name": "Nvme$subsystem", 00:10:41.861 "trtype": "$TEST_TRANSPORT", 00:10:41.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:41.861 "adrfam": "ipv4", 00:10:41.861 "trsvcid": "$NVMF_PORT", 00:10:41.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:41.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:41.861 "hdgst": ${hdgst:-false}, 00:10:41.861 "ddgst": ${ddgst:-false} 00:10:41.861 }, 00:10:41.861 "method": "bdev_nvme_attach_controller" 00:10:41.861 } 00:10:41.861 EOF 00:10:41.861 )") 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:41.861 11:29:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:41.861 "params": { 00:10:41.861 "name": "Nvme1", 00:10:41.861 "trtype": "tcp", 00:10:41.861 "traddr": "10.0.0.2", 00:10:41.861 "adrfam": "ipv4", 00:10:41.861 "trsvcid": "4420", 00:10:41.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:41.861 "hdgst": false, 00:10:41.861 "ddgst": false 00:10:41.861 }, 00:10:41.861 "method": "bdev_nvme_attach_controller" 00:10:41.861 }' 00:10:41.861 [2024-07-15 11:29:19.282247] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:10:41.861 [2024-07-15 11:29:19.282348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76074 ] 00:10:42.120 [2024-07-15 11:29:19.423503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.120 [2024-07-15 11:29:19.484050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.378 Running I/O for 10 seconds... 
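Collapsed from the xtrace above, the zcopy target setup and the first bdevperf pass amount to roughly the following (a sketch: rpc_cmd in the trace forwards these argument lists to the configured RPC client, shown here with scripts/rpc.py; the JSON is written to a temporary file instead of the /dev/fd/62 process substitution used by the test, and the subsystems/config wrapper around the attach block is assumed, since the trace only prints the inner block):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                     # zero-copy enabled TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  cat > /tmp/bdevperf.json <<'EOF'    # hypothetical file; wrapper layout assumed, params as printed above
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } } ] } ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192

The second pass further down reruns bdevperf against the same target with -t 5 -q 128 -w randrw -M 50 -o 8192.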
00:10:52.370 00:10:52.370 Latency(us) 00:10:52.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.370 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:52.370 Verification LBA range: start 0x0 length 0x1000 00:10:52.370 Nvme1n1 : 10.01 5932.86 46.35 0.00 0.00 21505.00 3768.32 32172.22 00:10:52.370 =================================================================================================================== 00:10:52.370 Total : 5932.86 46.35 0.00 0.00 21505.00 3768.32 32172.22 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76199 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:52.370 { 00:10:52.370 "params": { 00:10:52.370 "name": "Nvme$subsystem", 00:10:52.370 "trtype": "$TEST_TRANSPORT", 00:10:52.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:52.370 "adrfam": "ipv4", 00:10:52.370 "trsvcid": "$NVMF_PORT", 00:10:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:52.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:52.370 "hdgst": ${hdgst:-false}, 00:10:52.370 "ddgst": ${ddgst:-false} 00:10:52.370 }, 00:10:52.370 "method": "bdev_nvme_attach_controller" 00:10:52.370 } 00:10:52.370 EOF 00:10:52.370 )") 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:52.370 [2024-07-15 11:29:29.808996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.370 [2024-07-15 11:29:29.809037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:52.370 11:29:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:52.370 "params": { 00:10:52.370 "name": "Nvme1", 00:10:52.370 "trtype": "tcp", 00:10:52.370 "traddr": "10.0.0.2", 00:10:52.370 "adrfam": "ipv4", 00:10:52.370 "trsvcid": "4420", 00:10:52.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.370 "hdgst": false, 00:10:52.370 "ddgst": false 00:10:52.370 }, 00:10:52.370 "method": "bdev_nvme_attach_controller" 00:10:52.370 }' 00:10:52.370 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.370 [2024-07-15 11:29:29.820991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.370 [2024-07-15 11:29:29.821028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.370 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.370 [2024-07-15 11:29:29.833002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.370 [2024-07-15 11:29:29.833046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.370 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.370 [2024-07-15 11:29:29.844997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.370 [2024-07-15 11:29:29.845033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.857005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.857045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.868988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.869022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 [2024-07-15 11:29:29.872311] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:10:52.628 [2024-07-15 11:29:29.872413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76199 ] 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.880987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.881018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.893009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.893046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.905007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.905044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.917013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.917051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.929018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.929058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.941020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.941056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.953020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.953054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.965037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.628 [2024-07-15 11:29:29.965072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.628 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.628 [2024-07-15 11:29:29.977025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:29.977061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:29.989024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:29.989059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.001065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.001114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.013037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.013073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 [2024-07-15 11:29:30.016370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.025071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.025116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.033038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.033075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.041026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.041058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.053055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.053093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.061044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.061078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.073078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.073121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.085082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.085119] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.629 [2024-07-15 11:29:30.097086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.629 [2024-07-15 11:29:30.097124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.629 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.109071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.109104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.121083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.121124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 [2024-07-15 11:29:30.123075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.133075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.133109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.145091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.145131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.157106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.157154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.169091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.169131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.181105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.181156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.193104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.193148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.205087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.205120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.217164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.217213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.887 [2024-07-15 11:29:30.229162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.887 [2024-07-15 11:29:30.229210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.887 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.241153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.241190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.249143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.249178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.261147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.261183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.273265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.273306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 Running I/O for 5 seconds... 
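The repeated error pairs through this stretch come from nvmf_subsystem_add_ns being rejected because NSID 1 is already in use; one occurrence, spelled out (a sketch; the arguments mirror the add_ns call traced earlier):

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rpc_cmd is the test framework's RPC wrapper
  # target log:  spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use
  #              nvmf_rpc_ns_paused: Unable to add namespace
  # client log:  error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters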
00:10:52.888 [2024-07-15 11:29:30.285265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.285302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.302045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.302090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.319477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.319521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.334987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.335034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:52.888 [2024-07-15 11:29:30.351986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.888 [2024-07-15 11:29:30.352042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.888 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.145 [2024-07-15 11:29:30.367882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.145 [2024-07-15 11:29:30.367930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.145 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.145 [2024-07-15 11:29:30.385343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.145 [2024-07-15 11:29:30.385388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.145 2024/07/15 11:29:30 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.145 [2024-07-15 11:29:30.400994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.145 [2024-07-15 11:29:30.401036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.145 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.145 [2024-07-15 11:29:30.411523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.145 [2024-07-15 11:29:30.411581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.145 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.145 [2024-07-15 11:29:30.426941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.145 [2024-07-15 11:29:30.426990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.443051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.443098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.461348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.461397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.476962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.477010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.494122] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.494183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.509914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.509964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.527914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.527967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.543584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.543630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.560565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.560610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.578115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.578166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.593624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.593668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.146 [2024-07-15 11:29:30.610461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.146 [2024-07-15 11:29:30.610507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.146 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.626504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.626567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.637026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.637073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.649404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.649450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.664797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.664848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.681744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.681795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.698472] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.698519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.708850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.403 [2024-07-15 11:29:30.708891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.403 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.403 [2024-07-15 11:29:30.723982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.724029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.734494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.734537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.750237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.750282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.767422] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.767467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.783396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.783445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.800265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.800328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.815867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.815914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.826461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.826514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.841976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.842023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.857710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.857756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.404 [2024-07-15 11:29:30.874741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.404 [2024-07-15 11:29:30.874790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.404 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.891013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:53.661 [2024-07-15 11:29:30.891068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.908889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.908951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.924408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.924457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.934778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.934821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.949439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.949487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.965171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.965224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.980680] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.980731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:30.997002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:30.997054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.013033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.013088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.028916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.028979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.044739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.044789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.060338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.060399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.071050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.071104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.086265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.086313] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.103426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.103474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.661 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.661 [2024-07-15 11:29:31.119279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.661 [2024-07-15 11:29:31.119331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.662 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.662 [2024-07-15 11:29:31.136420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.662 [2024-07-15 11:29:31.136474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.153057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.153116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.168720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.168770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.179478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.179529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.195318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.195392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.211542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.211610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.228271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.228335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.245607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.245657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.256036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.256081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.266992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.267044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.278215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.278262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.290872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.290916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.301000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.301044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.312855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.312899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.326216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.326260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.342124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.342167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.353044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.353090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:53.932 [2024-07-15 11:29:31.364188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.364230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.380272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.380324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:53.932 [2024-07-15 11:29:31.395856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.932 [2024-07-15 11:29:31.395900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.932 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.406642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.406695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.422369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.422437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.438922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.438992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.455335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.455386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.472240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.472290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.488640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.488690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.505781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.505824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.214 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.214 [2024-07-15 11:29:31.521403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.214 [2024-07-15 11:29:31.521448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.537027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.537077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.554184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.554251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.570388] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.570442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.588032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.588087] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.606582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.606651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.623502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.623569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.634170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.634214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.645078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.645125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.657832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.657892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.668530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.668585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.215 [2024-07-15 11:29:31.679414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.215 [2024-07-15 11:29:31.679452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.215 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.696405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.696453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.715274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.715328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.731383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.731438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.748132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.748190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.764462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.764525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.781628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.781713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.798366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.798426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.814482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.814559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.831732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.831786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.847186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.847234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.857377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.857423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.871935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.871988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.890604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.890662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.905747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.905805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.916328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.916375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.930976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.931041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.941755] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.486 [2024-07-15 11:29:31.941803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.486 [2024-07-15 11:29:31.956658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
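The block of repeated failures above is a single negative-path loop: every call is nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 with bdev malloc0 and nsid 1 while NSID 1 is already attached, so the target rejects each attempt with JSON-RPC error -32602 (Invalid parameters). A minimal sketch of one such rejected call, assuming SPDK's bundled scripts/rpc.py, the default RPC socket /var/tmp/spdk.sock, and that the NSID is passed via the -n option, would be:
+ scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
+ scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
The first call attaches malloc0 as NSID 1; the second is expected to fail exactly as logged, since the subsystem already holds that NSID.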
00:10:54.486 [2024-07-15 11:29:31.956705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.486 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:31.972427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:31.972480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:31.989512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:31.989577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.006586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.006636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.022256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.022319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.039486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.039541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.055076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.055127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.072159] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.072214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.088088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.088137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.105232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.105297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.121240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.121300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.138790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.138849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.153897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.153941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.170773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.170822] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.186991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.187043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:54.743 [2024-07-15 11:29:32.204025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.743 [2024-07-15 11:29:32.204074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.743 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.219535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.219599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.230540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.230614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.247696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.247778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.261828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.261903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.278880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.278950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.293434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.293495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.306433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.000 [2024-07-15 11:29:32.306483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.000 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.000 [2024-07-15 11:29:32.321559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.321618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.339452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.339509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.354543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.354615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.364277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.364324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.375766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.375810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.388758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.388804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.404436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.404486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.421366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.421417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.436971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.437024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.001 [2024-07-15 11:29:32.447498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.447561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:55.001 [2024-07-15 11:29:32.462661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.001 [2024-07-15 11:29:32.462709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.001 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.258 [2024-07-15 11:29:32.477962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.258 [2024-07-15 11:29:32.478009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.258 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.258 [2024-07-15 11:29:32.488366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.258 [2024-07-15 11:29:32.488410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.503523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.503592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.514965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.515019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.529681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.529729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.540657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.540721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.555510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.555574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.571824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.571877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.588250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.588316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.599847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.599913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.616891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.616956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.634014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.634082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.645093] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.645160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.659856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.659899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.670236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.670280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.684952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.684998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.701324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.701375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.718430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.718478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.259 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.259 [2024-07-15 11:29:32.733962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.259 [2024-07-15 11:29:32.734007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.750913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.750973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.765708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.765776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.782675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.782727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.799226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.799304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.815209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.815265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.825601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.825649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.836853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.836899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.852194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.852240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.862458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.862500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.877227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.877277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.894182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.894243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.904703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.904750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.915924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.915972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.927014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.927071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.942202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.942241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.958861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.958903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.976059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.976094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.517 [2024-07-15 11:29:32.991400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.517 [2024-07-15 11:29:32.991436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.517 2024/07/15 11:29:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.775 [2024-07-15 11:29:33.001935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.001977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.017380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:55.776 [2024-07-15 11:29:33.017418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.032713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.032747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.048188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.048223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.058870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.058903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.073787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.073821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.090329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.090369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.107425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.107474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.123197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.123244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.133696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.133734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.148710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.148761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.164408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.164443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.181657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.181691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.197245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.197279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.207600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.207635] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.222540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.222586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.232574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.232607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:55.776 [2024-07-15 11:29:33.247836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.776 [2024-07-15 11:29:33.247897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.776 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.258786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.258830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.273367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.273404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.283882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.283916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.298505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.298539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.314265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.314298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.329628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.329663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.340233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.340268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.350923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.350959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.363345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.363383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.378948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.378983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.389485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.389518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.403538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.403583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.419353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.419388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.428698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.428730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.444167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.444202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.459758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.459792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:56.034 [2024-07-15 11:29:33.469946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.469979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.483982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.484016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.034 [2024-07-15 11:29:33.499101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.034 [2024-07-15 11:29:33.499138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.034 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.514465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.514507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.530234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.530280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.540594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.540630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.555957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.555992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.571216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.571263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.586964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.587002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.602417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.602452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.619375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.619418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.640988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.641044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.657169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.657232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.674535] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.674593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.689393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.689429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.705068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.705103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.715056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.715090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.731412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.731447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.748214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.748250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.292 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.292 [2024-07-15 11:29:33.764227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.292 [2024-07-15 11:29:33.764262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.550 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.550 [2024-07-15 11:29:33.780998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.550 [2024-07-15 11:29:33.781035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.550 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.550 [2024-07-15 11:29:33.796745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.550 [2024-07-15 11:29:33.796780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.550 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.550 [2024-07-15 11:29:33.806521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.550 [2024-07-15 11:29:33.806566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.550 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.550 [2024-07-15 11:29:33.822337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.550 [2024-07-15 11:29:33.822373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.550 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.550 [2024-07-15 11:29:33.839249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.550 [2024-07-15 11:29:33.839296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.855414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.855450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.872285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.872325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.888681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.888716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.907557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.907591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.922890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.922930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.938961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.938997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.956324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.956360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.971161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.971195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.986504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.986537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:33.997084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:33.997136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.551 [2024-07-15 11:29:34.012116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.551 [2024-07-15 11:29:34.012168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.551 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.028777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.028816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.044622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.044656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.057690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.057732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.067164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:56.810 [2024-07-15 11:29:34.067196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.082711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.082746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.099277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.099319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.110879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.110915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.123086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.123123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.140029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.140064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.810 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.810 [2024-07-15 11:29:34.155399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.810 [2024-07-15 11:29:34.155436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.172019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.172080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.187997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.188042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.204468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.204502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.221807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.221843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.237826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.237861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.247226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.247258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.262936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.262973] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:56.811 [2024-07-15 11:29:34.279674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.811 [2024-07-15 11:29:34.279706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.811 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.295536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.295585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.312228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.312262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.329132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.329167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.344688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.344721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.355241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.355275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.370384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.370423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.386303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.386340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.402940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.403000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.420376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.420435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.435520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.435570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.452385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.452422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.467579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.467615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.484233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.484269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.499758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.499791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.510231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.510265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.525227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.525260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.070 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.070 [2024-07-15 11:29:34.542004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.070 [2024-07-15 11:29:34.542043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.558198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.558232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:57.329 [2024-07-15 11:29:34.575175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.575229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.590589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.590624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.600592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.600625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.612198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.612233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.628051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.628106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.643803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.643837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.661706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.661744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.677301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.677339] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.694279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.694355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.708270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.708325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.726433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.726483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.743137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.743193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.756842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.756893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.778396] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.778444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.329 [2024-07-15 11:29:34.796741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.329 [2024-07-15 11:29:34.796787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.329 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.814040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.814085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.587 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.832020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.832079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.587 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.849798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.849839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.587 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.866631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.866673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.587 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.885865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.885950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.587 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.903359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.903406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.587 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.587 [2024-07-15 11:29:34.921368] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.587 [2024-07-15 11:29:34.921416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:34.938389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:34.938438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:34.956121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:34.956168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:34.972988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:34.973038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:34.990986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:34.991053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:35.009224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:35.009273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:35.026229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:35.026271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:35.043047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:35.043090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.588 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.588 [2024-07-15 11:29:35.061167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.588 [2024-07-15 11:29:35.061210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.079308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.079352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.096412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.096455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.115126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.115170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.133328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.133370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.149934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.149974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.162289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.162328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.176244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.176285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.194403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.194443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.209813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.847 [2024-07-15 11:29:35.209859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.226932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
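This stretch of the log is the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that is already attached, so every attempt is rejected with JSON-RPC error -32602. As a rough, hand-run illustration (not part of this automated run, which goes through the Go JSON-RPC client), the same rejection can be reproduced against a running SPDK target with scripts/rpc.py, assuming nqn.2016-06.io.spdk:cnode1 and the malloc0 bdev already exist as they do here:

# Sketch only: reproducing the rejection above by hand with scripts/rpc.py.
# Assumes a running SPDK nvmf target on the default RPC socket, with subsystem
# nqn.2016-06.io.spdk:cnode1 and bdev malloc0 already created, as in this test.
RPC=./scripts/rpc.py

# The first attach claims NSID 1 on the subsystem.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Every further attempt to attach a bdev as NSID 1 is refused by the target
# ("Requested NSID 1 already in use") and surfaces to the client as
# Code=-32602 Msg=Invalid parameters, which is exactly what the loop here logs.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    || echo "duplicate NSID rejected as expected"
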
00:10:57.847 [2024-07-15 11:29:35.226996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.847 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.847 [2024-07-15 11:29:35.244830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.848 [2024-07-15 11:29:35.244870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.848 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.848 [2024-07-15 11:29:35.259070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.848 [2024-07-15 11:29:35.259105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.848 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.848 [2024-07-15 11:29:35.275247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.848 [2024-07-15 11:29:35.275282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.848 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.848 [2024-07-15 11:29:35.291798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.848 [2024-07-15 11:29:35.291836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.848 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:57.848
00:10:57.848 Latency(us)
00:10:57.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:57.848 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:57.848 Nvme1n1 : 5.01 11114.02 86.83 0.00 0.00 11500.66 4289.63 25022.84
00:10:57.848 ===================================================================================================================
00:10:57.848 Total : 11114.02 86.83 0.00 0.00 11500.66 4289.63 25022.84
00:10:57.848 [2024-07-15 11:29:35.300116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.848 [2024-07-15 11:29:35.300150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.848 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.848 [2024-07-15 11:29:35.312153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.848 [2024-07-15 11:29:35.312190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.848 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.324181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.324230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.336230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.336290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.348189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.348234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.360188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.360232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.372172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.372212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.384158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.384192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.396191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.396236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.408192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.408231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.420198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.420242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.432199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.432236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.444166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.444197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:58.106 [2024-07-15 11:29:35.456169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.106 [2024-07-15 11:29:35.456197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.106 2024/07/15 11:29:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:58.106 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76199) - No such process 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76199 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.106 delay0 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.106 11:29:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:58.364 [2024-07-15 11:29:35.656540] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:04.929 Initializing NVMe Controllers 00:11:04.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:04.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:04.929 Initialization complete. Launching workers. 
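(For reference, the delay-bdev setup and abort run traced above reduce to the stand-alone sequence below. This is a sketch, not part of the captured run: the SPDK_DIR value and the direct use of scripts/rpc.py instead of the harness's rpc_cmd wrapper are assumptions; the commands and flags themselves are taken verbatim from the trace.)

SPDK_DIR=/home/vagrant/spdk_repo/spdk      # assumed repo location, matching the paths in this log
RPC="$SPDK_DIR/scripts/rpc.py"
# Wrap malloc0 in a delay bdev so I/O stays queued long enough to be aborted.
"$RPC" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# Expose the delay bdev as namespace 1 of the zcopy test subsystem.
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Queue random I/O and submit aborts against it for 5 seconds from core 0.
"$SPDK_DIR"/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'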
00:11:04.929 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 58 00:11:04.929 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 345, failed to submit 33 00:11:04.929 success 143, unsuccess 202, failed 0 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.929 rmmod nvme_tcp 00:11:04.929 rmmod nvme_fabrics 00:11:04.929 rmmod nvme_keyring 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76042 ']' 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76042 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76042 ']' 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76042 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76042 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:04.929 killing process with pid 76042 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76042' 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76042 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76042 00:11:04.929 11:29:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:04.929 00:11:04.929 real 0m23.685s 00:11:04.929 user 0m38.988s 00:11:04.929 sys 0m6.191s 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.929 11:29:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.929 ************************************ 00:11:04.929 END TEST nvmf_zcopy 00:11:04.929 ************************************ 00:11:04.929 11:29:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:04.929 11:29:42 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.929 11:29:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:04.929 11:29:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.929 11:29:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:04.929 ************************************ 00:11:04.929 START TEST nvmf_nmic 00:11:04.929 ************************************ 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.929 * Looking for test storage... 00:11:04.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.929 11:29:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:04.930 Cannot find device "nvmf_tgt_br" 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.930 Cannot find device "nvmf_tgt_br2" 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:04.930 Cannot find device "nvmf_tgt_br" 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:04.930 Cannot find device "nvmf_tgt_br2" 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.930 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:05.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:05.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:11:05.189 00:11:05.189 --- 10.0.0.2 ping statistics --- 00:11:05.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.189 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:05.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:05.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:11:05.189 00:11:05.189 --- 10.0.0.3 ping statistics --- 00:11:05.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.189 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:11:05.189 00:11:05.189 --- 10.0.0.1 ping statistics --- 00:11:05.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.189 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76523 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76523 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76523 ']' 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.189 11:29:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:05.189 [2024-07-15 11:29:42.594513] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:11:05.189 [2024-07-15 11:29:42.594694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.448 [2024-07-15 11:29:42.735690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.448 [2024-07-15 11:29:42.806877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.448 [2024-07-15 11:29:42.806931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.448 [2024-07-15 11:29:42.806943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.448 [2024-07-15 11:29:42.806952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.448 [2024-07-15 11:29:42.806959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.448 [2024-07-15 11:29:42.807038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.448 [2024-07-15 11:29:42.807324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.448 [2024-07-15 11:29:42.807778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.448 [2024-07-15 11:29:42.807798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 [2024-07-15 11:29:43.643331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 Malloc0 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
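(The target-side bring-up that nmic.sh has driven so far through rpc_cmd, plus the listener it adds next, reduce to the rpc.py calls below. A stand-alone sketch only: the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, while the commands and arguments are the ones visible in the trace.)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path; rpc_cmd wraps this script
"$RPC" nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the harness's options
"$RPC" bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

(Test case1 below then tries to add the same Malloc0 to a second subsystem, cnode2, and expects the Code=-32602 rejection: the bdev is already claimed exclusively by cnode1, so it cannot be opened again. The initiator side afterwards attaches with nvme connect against ports 4420 and 4421, as traced further down.)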
00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 [2024-07-15 11:29:43.699689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 test case1: single bdev can't be used in multiple subsystems 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 [2024-07-15 11:29:43.723627] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:06.385 [2024-07-15 11:29:43.723696] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:06.385 [2024-07-15 11:29:43.723717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:06.385 2024/07/15 11:29:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:06.385 request: 00:11:06.385 { 00:11:06.385 "method": "nvmf_subsystem_add_ns", 00:11:06.385 "params": { 00:11:06.385 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:06.385 "namespace": { 00:11:06.385 "bdev_name": "Malloc0", 00:11:06.385 "no_auto_visible": false 00:11:06.385 } 00:11:06.385 } 00:11:06.385 } 00:11:06.385 Got JSON-RPC error response 00:11:06.385 GoRPCClient: error on JSON-RPC call 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:06.385 Adding namespace failed - expected result. 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:06.385 test case2: host connect to nvmf target in multiple paths 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:06.385 [2024-07-15 11:29:43.735870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.385 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.643 11:29:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:06.643 11:29:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.643 11:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.643 11:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.643 11:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:06.643 11:29:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:09.201 11:29:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:09.201 [global] 00:11:09.201 thread=1 00:11:09.201 invalidate=1 00:11:09.201 rw=write 00:11:09.201 time_based=1 00:11:09.201 runtime=1 00:11:09.201 ioengine=libaio 00:11:09.201 direct=1 00:11:09.201 bs=4096 00:11:09.201 iodepth=1 00:11:09.201 norandommap=0 00:11:09.201 numjobs=1 00:11:09.201 00:11:09.201 verify_dump=1 00:11:09.201 verify_backlog=512 00:11:09.201 verify_state_save=0 00:11:09.201 do_verify=1 00:11:09.201 verify=crc32c-intel 00:11:09.201 [job0] 00:11:09.201 filename=/dev/nvme0n1 00:11:09.201 Could not set queue depth (nvme0n1) 00:11:09.201 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.201 fio-3.35 00:11:09.201 
Starting 1 thread 00:11:10.137 00:11:10.137 job0: (groupid=0, jobs=1): err= 0: pid=76633: Mon Jul 15 11:29:47 2024 00:11:10.137 read: IOPS=2673, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1001msec) 00:11:10.137 slat (nsec): min=19203, max=69857, avg=27859.41, stdev=5176.53 00:11:10.137 clat (usec): min=135, max=266, avg=163.45, stdev=11.63 00:11:10.137 lat (usec): min=165, max=296, avg=191.31, stdev=13.04 00:11:10.137 clat percentiles (usec): 00:11:10.137 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:11:10.137 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:11:10.137 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 184], 00:11:10.137 | 99.00th=[ 208], 99.50th=[ 223], 99.90th=[ 249], 99.95th=[ 265], 00:11:10.137 | 99.99th=[ 269] 00:11:10.137 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:10.137 slat (nsec): min=22512, max=99217, avg=38618.54, stdev=7528.26 00:11:10.137 clat (usec): min=81, max=550, avg=114.64, stdev=14.15 00:11:10.137 lat (usec): min=124, max=592, avg=153.26, stdev=17.23 00:11:10.137 clat percentiles (usec): 00:11:10.137 | 1.00th=[ 99], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 106], 00:11:10.137 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:11:10.137 | 70.00th=[ 118], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 135], 00:11:10.137 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 231], 99.95th=[ 265], 00:11:10.137 | 99.99th=[ 553] 00:11:10.137 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:10.137 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:10.137 lat (usec) : 100=1.22%, 250=98.71%, 500=0.05%, 750=0.02% 00:11:10.137 cpu : usr=3.40%, sys=14.60%, ctx=5748, majf=0, minf=2 00:11:10.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:10.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.137 issued rwts: total=2676,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:10.137 00:11:10.137 Run status group 0 (all jobs): 00:11:10.137 READ: bw=10.4MiB/s (10.9MB/s), 10.4MiB/s-10.4MiB/s (10.9MB/s-10.9MB/s), io=10.5MiB (11.0MB), run=1001-1001msec 00:11:10.137 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:10.137 00:11:10.137 Disk stats (read/write): 00:11:10.137 nvme0n1: ios=2610/2599, merge=0/0, ticks=451/348, in_queue=799, util=91.68% 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:10.137 11:29:47 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.137 rmmod nvme_tcp 00:11:10.137 rmmod nvme_fabrics 00:11:10.137 rmmod nvme_keyring 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76523 ']' 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76523 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76523 ']' 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76523 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76523 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76523' 00:11:10.137 killing process with pid 76523 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76523 00:11:10.137 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76523 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:10.395 00:11:10.395 real 0m5.676s 00:11:10.395 user 0m19.242s 00:11:10.395 sys 0m1.345s 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.395 11:29:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:10.395 ************************************ 00:11:10.395 END TEST nvmf_nmic 00:11:10.395 ************************************ 00:11:10.395 11:29:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:10.395 
11:29:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:10.395 11:29:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.395 11:29:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.395 11:29:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.395 ************************************ 00:11:10.395 START TEST nvmf_fio_target 00:11:10.395 ************************************ 00:11:10.395 11:29:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:10.395 * Looking for test storage... 00:11:10.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.395 11:29:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.395 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.654 11:29:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:10.655 Cannot find device "nvmf_tgt_br" 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.655 Cannot find device "nvmf_tgt_br2" 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:10.655 Cannot find device "nvmf_tgt_br" 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:10.655 Cannot find device "nvmf_tgt_br2" 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:10.655 11:29:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:10.655 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:10.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:10.913 00:11:10.913 --- 10.0.0.2 ping statistics --- 00:11:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.913 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:10.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:10.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:10.913 00:11:10.913 --- 10.0.0.3 ping statistics --- 00:11:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.913 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:10.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:10.913 00:11:10.913 --- 10.0.0.1 ping statistics --- 00:11:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.913 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:10.913 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76810 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76810 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76810 ']' 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.914 11:29:48 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.914 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.914 [2024-07-15 11:29:48.309721] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:11:10.914 [2024-07-15 11:29:48.309851] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.172 [2024-07-15 11:29:48.462258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.172 [2024-07-15 11:29:48.550326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.172 [2024-07-15 11:29:48.550394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.172 [2024-07-15 11:29:48.550411] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.172 [2024-07-15 11:29:48.550423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.172 [2024-07-15 11:29:48.550434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.172 [2024-07-15 11:29:48.550565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.172 [2024-07-15 11:29:48.550758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.172 [2024-07-15 11:29:48.550955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.172 [2024-07-15 11:29:48.550970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.430 11:29:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.430 [2024-07-15 11:29:48.901562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.689 11:29:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.947 11:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:11.947 11:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.205 11:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 
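
Condensed sketch of the veth/bridge topology that the nvmf/common.sh trace above builds. Interface names, addresses, and the TCP port are taken directly from the logged commands; the script's cleanup paths, "true" fallbacks, and exact ordering are omitted, so treat this as a reconstruction rather than the literal script:

    # Condensed from the ip/iptables commands in the trace above; not the literal script.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-facing pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for br_if in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_if" up
        ip link set "$br_if" master nvmf_br     # stitch host and namespace sides together
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # nvmf_tgt then runs inside the namespace, so it owns 10.0.0.2/10.0.0.3:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With that in place the initiator side of the test stays on the host at 10.0.0.1 while the target listens inside the namespace, which is why the listener and the later nvme connect in this run both point at 10.0.0.2 port 4420 and why the ping checks above are required to pass before nvmfappstart proceeds.
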
00:11:12.205 11:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.464 11:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:12.464 11:29:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.722 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:12.722 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:12.980 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.238 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:13.238 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.496 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:13.496 11:29:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.754 11:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:13.754 11:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:14.012 11:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.270 11:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.270 11:29:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.906 11:29:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.906 11:29:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.906 11:29:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.164 [2024-07-15 11:29:52.572267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.164 11:29:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:15.729 11:29:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:15.985 11:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.985 11:29:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:15.985 11:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.986 11:29:53 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.986 11:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:15.986 11:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:15.986 11:29:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:18.514 11:29:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:18.514 [global] 00:11:18.514 thread=1 00:11:18.514 invalidate=1 00:11:18.514 rw=write 00:11:18.514 time_based=1 00:11:18.514 runtime=1 00:11:18.514 ioengine=libaio 00:11:18.514 direct=1 00:11:18.514 bs=4096 00:11:18.514 iodepth=1 00:11:18.514 norandommap=0 00:11:18.514 numjobs=1 00:11:18.514 00:11:18.514 verify_dump=1 00:11:18.514 verify_backlog=512 00:11:18.514 verify_state_save=0 00:11:18.514 do_verify=1 00:11:18.514 verify=crc32c-intel 00:11:18.514 [job0] 00:11:18.514 filename=/dev/nvme0n1 00:11:18.514 [job1] 00:11:18.514 filename=/dev/nvme0n2 00:11:18.514 [job2] 00:11:18.514 filename=/dev/nvme0n3 00:11:18.514 [job3] 00:11:18.514 filename=/dev/nvme0n4 00:11:18.514 Could not set queue depth (nvme0n1) 00:11:18.514 Could not set queue depth (nvme0n2) 00:11:18.514 Could not set queue depth (nvme0n3) 00:11:18.514 Could not set queue depth (nvme0n4) 00:11:18.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.514 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.514 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.514 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.514 fio-3.35 00:11:18.514 Starting 4 threads 00:11:19.449 00:11:19.449 job0: (groupid=0, jobs=1): err= 0: pid=77096: Mon Jul 15 11:29:56 2024 00:11:19.449 read: IOPS=1436, BW=5746KiB/s (5884kB/s)(5752KiB/1001msec) 00:11:19.449 slat (nsec): min=16284, max=76266, avg=31497.86, stdev=8659.87 00:11:19.449 clat (usec): min=221, max=910, avg=353.67, stdev=68.77 00:11:19.449 lat (usec): min=252, max=948, avg=385.17, stdev=70.50 00:11:19.449 clat percentiles (usec): 00:11:19.449 | 1.00th=[ 255], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 297], 00:11:19.449 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 355], 00:11:19.449 | 70.00th=[ 371], 80.00th=[ 408], 90.00th=[ 453], 95.00th=[ 482], 00:11:19.449 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 816], 99.95th=[ 914], 00:11:19.449 | 99.99th=[ 914] 00:11:19.449 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:19.449 slat (usec): min=25, max=130, avg=45.27, stdev=10.67 00:11:19.449 clat (usec): min=124, max=3736, avg=238.53, 
stdev=108.13 00:11:19.449 lat (usec): min=170, max=3781, avg=283.80, stdev=108.18 00:11:19.449 clat percentiles (usec): 00:11:19.449 | 1.00th=[ 161], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:11:19.449 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 233], 00:11:19.449 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 343], 00:11:19.449 | 99.00th=[ 392], 99.50th=[ 490], 99.90th=[ 1303], 99.95th=[ 3752], 00:11:19.449 | 99.99th=[ 3752] 00:11:19.449 bw ( KiB/s): min= 8192, max= 8192, per=22.38%, avg=8192.00, stdev= 0.00, samples=1 00:11:19.449 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:19.449 lat (usec) : 250=36.38%, 500=61.97%, 750=1.48%, 1000=0.10% 00:11:19.449 lat (msec) : 2=0.03%, 4=0.03% 00:11:19.449 cpu : usr=1.90%, sys=8.90%, ctx=2983, majf=0, minf=7 00:11:19.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.449 issued rwts: total=1438,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.449 job1: (groupid=0, jobs=1): err= 0: pid=77097: Mon Jul 15 11:29:56 2024 00:11:19.449 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:11:19.449 slat (nsec): min=14822, max=62138, avg=17984.57, stdev=3972.15 00:11:19.449 clat (usec): min=148, max=426, avg=172.28, stdev=11.59 00:11:19.449 lat (usec): min=164, max=441, avg=190.26, stdev=12.55 00:11:19.449 clat percentiles (usec): 00:11:19.449 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 163], 00:11:19.449 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:11:19.449 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 192], 00:11:19.449 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 223], 00:11:19.449 | 99.99th=[ 429] 00:11:19.449 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:19.449 slat (usec): min=20, max=126, avg=27.09, stdev= 7.69 00:11:19.449 clat (usec): min=102, max=462, avg=129.95, stdev=12.14 00:11:19.449 lat (usec): min=125, max=485, avg=157.05, stdev=15.79 00:11:19.449 clat percentiles (usec): 00:11:19.449 | 1.00th=[ 112], 5.00th=[ 116], 10.00th=[ 118], 20.00th=[ 122], 00:11:19.449 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:11:19.449 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 143], 95.00th=[ 149], 00:11:19.449 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 206], 99.95th=[ 265], 00:11:19.449 | 99.99th=[ 461] 00:11:19.449 bw ( KiB/s): min=12288, max=12288, per=33.57%, avg=12288.00, stdev= 0.00, samples=1 00:11:19.449 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:19.449 lat (usec) : 250=99.93%, 500=0.07% 00:11:19.449 cpu : usr=2.50%, sys=9.70%, ctx=5731, majf=0, minf=12 00:11:19.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.449 issued rwts: total=2659,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.449 job2: (groupid=0, jobs=1): err= 0: pid=77098: Mon Jul 15 11:29:56 2024 00:11:19.449 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:19.449 slat (nsec): min=16174, max=96883, 
avg=28794.07, stdev=7903.87 00:11:19.449 clat (usec): min=184, max=1102, avg=332.65, stdev=58.88 00:11:19.449 lat (usec): min=203, max=1130, avg=361.45, stdev=61.49 00:11:19.449 clat percentiles (usec): 00:11:19.449 | 1.00th=[ 202], 5.00th=[ 262], 10.00th=[ 277], 20.00th=[ 289], 00:11:19.449 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:11:19.449 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 424], 00:11:19.449 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 1090], 99.95th=[ 1106], 00:11:19.449 | 99.99th=[ 1106] 00:11:19.449 write: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec); 0 zone resets 00:11:19.449 slat (nsec): min=21227, max=90264, avg=34433.60, stdev=9094.62 00:11:19.449 clat (usec): min=116, max=798, avg=232.77, stdev=49.18 00:11:19.449 lat (usec): min=143, max=824, avg=267.21, stdev=50.76 00:11:19.449 clat percentiles (usec): 00:11:19.449 | 1.00th=[ 139], 5.00th=[ 167], 10.00th=[ 190], 20.00th=[ 210], 00:11:19.449 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:11:19.449 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 285], 00:11:19.449 | 99.00th=[ 371], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 799], 00:11:19.449 | 99.99th=[ 799] 00:11:19.449 bw ( KiB/s): min= 8192, max= 8192, per=22.38%, avg=8192.00, stdev= 0.00, samples=1 00:11:19.450 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:19.450 lat (usec) : 250=41.02%, 500=58.47%, 750=0.35%, 1000=0.09% 00:11:19.450 lat (msec) : 2=0.06% 00:11:19.450 cpu : usr=1.90%, sys=7.80%, ctx=3167, majf=0, minf=11 00:11:19.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.450 issued rwts: total=1536,1628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.450 job3: (groupid=0, jobs=1): err= 0: pid=77099: Mon Jul 15 11:29:56 2024 00:11:19.450 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:19.450 slat (nsec): min=14144, max=53778, avg=21053.73, stdev=6043.88 00:11:19.450 clat (usec): min=152, max=296, avg=177.17, stdev=14.45 00:11:19.450 lat (usec): min=168, max=334, avg=198.22, stdev=17.15 00:11:19.450 clat percentiles (usec): 00:11:19.450 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:11:19.450 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:11:19.450 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:11:19.450 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 269], 99.95th=[ 285], 00:11:19.450 | 99.99th=[ 297] 00:11:19.450 write: IOPS=2920, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec); 0 zone resets 00:11:19.450 slat (usec): min=20, max=104, avg=29.81, stdev= 7.97 00:11:19.450 clat (usec): min=111, max=1726, avg=134.49, stdev=33.13 00:11:19.450 lat (usec): min=132, max=1749, avg=164.31, stdev=34.84 00:11:19.450 clat percentiles (usec): 00:11:19.450 | 1.00th=[ 116], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:11:19.450 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:11:19.450 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:11:19.450 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 347], 99.95th=[ 603], 00:11:19.450 | 99.99th=[ 1729] 00:11:19.450 bw ( KiB/s): min=11088, max=12288, per=31.93%, avg=11688.00, stdev=848.53, samples=2 00:11:19.450 iops : min= 2772, max= 3072, 
avg=2922.00, stdev=212.13, samples=2 00:11:19.450 lat (usec) : 250=99.62%, 500=0.35%, 750=0.02% 00:11:19.450 lat (msec) : 2=0.02% 00:11:19.450 cpu : usr=2.30%, sys=10.90%, ctx=5484, majf=0, minf=7 00:11:19.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:19.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.450 issued rwts: total=2560,2923,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:19.450 00:11:19.450 Run status group 0 (all jobs): 00:11:19.450 READ: bw=32.0MiB/s (33.5MB/s), 5746KiB/s-10.4MiB/s (5884kB/s-10.9MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:11:19.450 WRITE: bw=35.7MiB/s (37.5MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=35.8MiB (37.5MB), run=1001-1001msec 00:11:19.450 00:11:19.450 Disk stats (read/write): 00:11:19.450 nvme0n1: ios=1105/1536, merge=0/0, ticks=421/388, in_queue=809, util=87.68% 00:11:19.450 nvme0n2: ios=2311/2560, merge=0/0, ticks=414/355, in_queue=769, util=86.59% 00:11:19.450 nvme0n3: ios=1207/1536, merge=0/0, ticks=407/379, in_queue=786, util=88.62% 00:11:19.450 nvme0n4: ios=2139/2560, merge=0/0, ticks=390/378, in_queue=768, util=89.56% 00:11:19.450 11:29:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:19.450 [global] 00:11:19.450 thread=1 00:11:19.450 invalidate=1 00:11:19.450 rw=randwrite 00:11:19.450 time_based=1 00:11:19.450 runtime=1 00:11:19.450 ioengine=libaio 00:11:19.450 direct=1 00:11:19.450 bs=4096 00:11:19.450 iodepth=1 00:11:19.450 norandommap=0 00:11:19.450 numjobs=1 00:11:19.450 00:11:19.450 verify_dump=1 00:11:19.450 verify_backlog=512 00:11:19.450 verify_state_save=0 00:11:19.450 do_verify=1 00:11:19.450 verify=crc32c-intel 00:11:19.450 [job0] 00:11:19.450 filename=/dev/nvme0n1 00:11:19.450 [job1] 00:11:19.450 filename=/dev/nvme0n2 00:11:19.450 [job2] 00:11:19.450 filename=/dev/nvme0n3 00:11:19.450 [job3] 00:11:19.450 filename=/dev/nvme0n4 00:11:19.450 Could not set queue depth (nvme0n1) 00:11:19.450 Could not set queue depth (nvme0n2) 00:11:19.450 Could not set queue depth (nvme0n3) 00:11:19.450 Could not set queue depth (nvme0n4) 00:11:19.708 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.708 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.708 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.708 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.708 fio-3.35 00:11:19.708 Starting 4 threads 00:11:21.087 00:11:21.087 job0: (groupid=0, jobs=1): err= 0: pid=77152: Mon Jul 15 11:29:58 2024 00:11:21.087 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:21.087 slat (nsec): min=14540, max=56676, avg=18397.33, stdev=4631.04 00:11:21.087 clat (usec): min=150, max=419, avg=193.51, stdev=32.29 00:11:21.087 lat (usec): min=166, max=435, avg=211.90, stdev=32.72 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:11:21.087 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:11:21.087 | 70.00th=[ 196], 80.00th=[ 223], 90.00th=[ 247], 95.00th=[ 262], 00:11:21.087 | 
99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 375], 00:11:21.087 | 99.99th=[ 420] 00:11:21.087 write: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:11:21.087 slat (usec): min=20, max=142, avg=27.11, stdev=10.22 00:11:21.087 clat (usec): min=62, max=768, avg=139.44, stdev=24.14 00:11:21.087 lat (usec): min=131, max=803, avg=166.55, stdev=27.90 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:11:21.087 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:11:21.087 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 163], 95.00th=[ 186], 00:11:21.087 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 229], 99.95th=[ 523], 00:11:21.087 | 99.99th=[ 766] 00:11:21.087 bw ( KiB/s): min=12288, max=12288, per=31.71%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.087 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.087 lat (usec) : 100=0.13%, 250=95.52%, 500=4.31%, 750=0.02%, 1000=0.02% 00:11:21.087 cpu : usr=2.30%, sys=8.80%, ctx=5265, majf=0, minf=6 00:11:21.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.087 issued rwts: total=2560,2682,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.087 job1: (groupid=0, jobs=1): err= 0: pid=77153: Mon Jul 15 11:29:58 2024 00:11:21.087 read: IOPS=1584, BW=6338KiB/s (6490kB/s)(6344KiB/1001msec) 00:11:21.087 slat (nsec): min=11891, max=34477, avg=13916.62, stdev=2676.30 00:11:21.087 clat (usec): min=252, max=933, avg=286.85, stdev=32.84 00:11:21.087 lat (usec): min=269, max=946, avg=300.77, stdev=32.73 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:11:21.087 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:21.087 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 302], 95.00th=[ 310], 00:11:21.087 | 99.00th=[ 322], 99.50th=[ 408], 99.90th=[ 922], 99.95th=[ 930], 00:11:21.087 | 99.99th=[ 930] 00:11:21.087 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:21.087 slat (usec): min=14, max=101, avg=26.26, stdev= 7.98 00:11:21.087 clat (usec): min=85, max=1706, avg=225.49, stdev=48.64 00:11:21.087 lat (usec): min=142, max=1729, avg=251.75, stdev=50.77 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:21.087 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:11:21.087 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 265], 95.00th=[ 277], 00:11:21.087 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 758], 99.95th=[ 889], 00:11:21.087 | 99.99th=[ 1713] 00:11:21.087 bw ( KiB/s): min= 8192, max= 8192, per=21.14%, avg=8192.00, stdev= 0.00, samples=1 00:11:21.087 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:21.087 lat (usec) : 100=0.03%, 250=48.32%, 500=51.35%, 750=0.14%, 1000=0.14% 00:11:21.087 lat (msec) : 2=0.03% 00:11:21.087 cpu : usr=1.60%, sys=5.90%, ctx=3639, majf=0, minf=15 00:11:21.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.087 issued 
rwts: total=1586,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.087 job2: (groupid=0, jobs=1): err= 0: pid=77154: Mon Jul 15 11:29:58 2024 00:11:21.087 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:21.087 slat (nsec): min=13843, max=72016, avg=20177.08, stdev=6197.40 00:11:21.087 clat (usec): min=153, max=273, avg=179.45, stdev=14.34 00:11:21.087 lat (usec): min=169, max=301, avg=199.63, stdev=16.99 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:11:21.087 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:11:21.087 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:11:21.087 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 265], 00:11:21.087 | 99.99th=[ 273] 00:11:21.087 write: IOPS=2916, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:11:21.087 slat (nsec): min=19909, max=74439, avg=27078.36, stdev=6711.06 00:11:21.087 clat (usec): min=111, max=580, avg=136.56, stdev=18.04 00:11:21.087 lat (usec): min=134, max=604, avg=163.63, stdev=19.92 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 126], 00:11:21.087 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:11:21.087 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:11:21.087 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 338], 99.95th=[ 545], 00:11:21.087 | 99.99th=[ 578] 00:11:21.087 bw ( KiB/s): min=12288, max=12288, per=31.71%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.087 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.087 lat (usec) : 250=99.74%, 500=0.22%, 750=0.04% 00:11:21.087 cpu : usr=2.70%, sys=9.50%, ctx=5479, majf=0, minf=11 00:11:21.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.087 issued rwts: total=2560,2919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.087 job3: (groupid=0, jobs=1): err= 0: pid=77155: Mon Jul 15 11:29:58 2024 00:11:21.087 read: IOPS=1585, BW=6342KiB/s (6494kB/s)(6348KiB/1001msec) 00:11:21.087 slat (nsec): min=11564, max=38006, avg=16057.47, stdev=2656.03 00:11:21.087 clat (usec): min=155, max=940, avg=284.72, stdev=31.46 00:11:21.087 lat (usec): min=186, max=956, avg=300.78, stdev=31.43 00:11:21.087 clat percentiles (usec): 00:11:21.087 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:11:21.087 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:11:21.087 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 297], 95.00th=[ 306], 00:11:21.087 | 99.00th=[ 318], 99.50th=[ 392], 99.90th=[ 906], 99.95th=[ 938], 00:11:21.087 | 99.99th=[ 938] 00:11:21.088 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:21.088 slat (nsec): min=14693, max=81412, avg=26691.20, stdev=7687.80 00:11:21.088 clat (usec): min=122, max=1638, avg=225.15, stdev=46.08 00:11:21.088 lat (usec): min=155, max=1660, avg=251.84, stdev=48.48 00:11:21.088 clat percentiles (usec): 00:11:21.088 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:21.088 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:11:21.088 | 70.00th=[ 227], 80.00th=[ 237], 
90.00th=[ 265], 95.00th=[ 277], 00:11:21.088 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 717], 99.95th=[ 824], 00:11:21.088 | 99.99th=[ 1631] 00:11:21.088 bw ( KiB/s): min= 8192, max= 8192, per=21.14%, avg=8192.00, stdev= 0.00, samples=1 00:11:21.088 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:21.088 lat (usec) : 250=48.47%, 500=51.20%, 750=0.22%, 1000=0.08% 00:11:21.088 lat (msec) : 2=0.03% 00:11:21.088 cpu : usr=1.30%, sys=6.20%, ctx=3637, majf=0, minf=13 00:11:21.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.088 issued rwts: total=1587,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.088 00:11:21.088 Run status group 0 (all jobs): 00:11:21.088 READ: bw=32.4MiB/s (33.9MB/s), 6338KiB/s-9.99MiB/s (6490kB/s-10.5MB/s), io=32.4MiB (34.0MB), run=1001-1001msec 00:11:21.088 WRITE: bw=37.8MiB/s (39.7MB/s), 8184KiB/s-11.4MiB/s (8380kB/s-11.9MB/s), io=37.9MiB (39.7MB), run=1001-1001msec 00:11:21.088 00:11:21.088 Disk stats (read/write): 00:11:21.088 nvme0n1: ios=2105/2560, merge=0/0, ticks=427/391, in_queue=818, util=88.28% 00:11:21.088 nvme0n2: ios=1544/1536, merge=0/0, ticks=458/377, in_queue=835, util=88.21% 00:11:21.088 nvme0n3: ios=2187/2560, merge=0/0, ticks=391/383, in_queue=774, util=88.98% 00:11:21.088 nvme0n4: ios=1511/1536, merge=0/0, ticks=438/378, in_queue=816, util=89.73% 00:11:21.088 11:29:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:21.088 [global] 00:11:21.088 thread=1 00:11:21.088 invalidate=1 00:11:21.088 rw=write 00:11:21.088 time_based=1 00:11:21.088 runtime=1 00:11:21.088 ioengine=libaio 00:11:21.088 direct=1 00:11:21.088 bs=4096 00:11:21.088 iodepth=128 00:11:21.088 norandommap=0 00:11:21.088 numjobs=1 00:11:21.088 00:11:21.088 verify_dump=1 00:11:21.088 verify_backlog=512 00:11:21.088 verify_state_save=0 00:11:21.088 do_verify=1 00:11:21.088 verify=crc32c-intel 00:11:21.088 [job0] 00:11:21.088 filename=/dev/nvme0n1 00:11:21.088 [job1] 00:11:21.088 filename=/dev/nvme0n2 00:11:21.088 [job2] 00:11:21.088 filename=/dev/nvme0n3 00:11:21.088 [job3] 00:11:21.088 filename=/dev/nvme0n4 00:11:21.088 Could not set queue depth (nvme0n1) 00:11:21.088 Could not set queue depth (nvme0n2) 00:11:21.088 Could not set queue depth (nvme0n3) 00:11:21.088 Could not set queue depth (nvme0n4) 00:11:21.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.088 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.088 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.088 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.088 fio-3.35 00:11:21.088 Starting 4 threads 00:11:22.463 00:11:22.463 job0: (groupid=0, jobs=1): err= 0: pid=77215: Mon Jul 15 11:29:59 2024 00:11:22.463 read: IOPS=2371, BW=9488KiB/s (9715kB/s)(9516KiB/1003msec) 00:11:22.463 slat (usec): min=6, max=8045, avg=201.57, stdev=937.04 00:11:22.463 clat (usec): min=688, max=46514, avg=25144.82, stdev=4461.73 00:11:22.463 lat (usec): min=5724, max=46527, avg=25346.39, stdev=4421.94 
00:11:22.463 clat percentiles (usec): 00:11:22.463 | 1.00th=[ 6128], 5.00th=[19006], 10.00th=[22414], 20.00th=[23200], 00:11:22.463 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24773], 60.00th=[25822], 00:11:22.463 | 70.00th=[26346], 80.00th=[27395], 90.00th=[28967], 95.00th=[31065], 00:11:22.463 | 99.00th=[39584], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:11:22.463 | 99.99th=[46400] 00:11:22.463 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:11:22.463 slat (usec): min=12, max=5538, avg=196.39, stdev=811.62 00:11:22.463 clat (usec): min=14504, max=55168, avg=25983.47, stdev=8138.07 00:11:22.463 lat (usec): min=16213, max=55192, avg=26179.85, stdev=8158.48 00:11:22.463 clat percentiles (usec): 00:11:22.463 | 1.00th=[17695], 5.00th=[19268], 10.00th=[20317], 20.00th=[21890], 00:11:22.463 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22414], 60.00th=[22676], 00:11:22.463 | 70.00th=[23200], 80.00th=[27132], 90.00th=[40109], 95.00th=[43779], 00:11:22.463 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:11:22.463 | 99.99th=[55313] 00:11:22.463 bw ( KiB/s): min= 9920, max=10560, per=15.39%, avg=10240.00, stdev=452.55, samples=2 00:11:22.463 iops : min= 2480, max= 2640, avg=2560.00, stdev=113.14, samples=2 00:11:22.463 lat (usec) : 750=0.02% 00:11:22.463 lat (msec) : 10=0.65%, 20=6.28%, 50=91.92%, 100=1.13% 00:11:22.463 cpu : usr=2.30%, sys=6.79%, ctx=236, majf=0, minf=13 00:11:22.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:11:22.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.463 issued rwts: total=2379,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.463 job1: (groupid=0, jobs=1): err= 0: pid=77216: Mon Jul 15 11:29:59 2024 00:11:22.463 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:22.463 slat (usec): min=5, max=4492, avg=84.10, stdev=416.00 00:11:22.463 clat (usec): min=7946, max=14961, avg=11178.01, stdev=984.35 00:11:22.463 lat (usec): min=7968, max=15268, avg=11262.11, stdev=1008.13 00:11:22.463 clat percentiles (usec): 00:11:22.463 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10683], 00:11:22.463 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:11:22.463 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[12780], 00:11:22.463 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14222], 99.95th=[14353], 00:11:22.463 | 99.99th=[15008] 00:11:22.463 write: IOPS=5995, BW=23.4MiB/s (24.6MB/s)(23.4MiB/1001msec); 0 zone resets 00:11:22.463 slat (usec): min=9, max=3710, avg=80.21, stdev=367.64 00:11:22.463 clat (usec): min=391, max=15122, avg=10586.27, stdev=1380.23 00:11:22.463 lat (usec): min=432, max=15201, avg=10666.48, stdev=1363.13 00:11:22.463 clat percentiles (usec): 00:11:22.463 | 1.00th=[ 6980], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 9503], 00:11:22.463 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:11:22.463 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12518], 00:11:22.463 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13960], 99.95th=[14091], 00:11:22.463 | 99.99th=[15139] 00:11:22.463 bw ( KiB/s): min=24576, max=24576, per=36.94%, avg=24576.00, stdev= 0.00, samples=1 00:11:22.464 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:22.464 lat (usec) : 500=0.02% 00:11:22.464 
lat (msec) : 4=0.24%, 10=17.20%, 20=82.54% 00:11:22.464 cpu : usr=5.10%, sys=15.80%, ctx=482, majf=0, minf=12 00:11:22.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:22.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.464 issued rwts: total=5632,6001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.464 job2: (groupid=0, jobs=1): err= 0: pid=77217: Mon Jul 15 11:29:59 2024 00:11:22.464 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:22.464 slat (usec): min=6, max=3707, avg=101.12, stdev=469.68 00:11:22.464 clat (usec): min=10079, max=16078, avg=13517.36, stdev=816.54 00:11:22.464 lat (usec): min=10517, max=16388, avg=13618.47, stdev=692.29 00:11:22.464 clat percentiles (usec): 00:11:22.464 | 1.00th=[10683], 5.00th=[11731], 10.00th=[12780], 20.00th=[13173], 00:11:22.464 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:11:22.464 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:11:22.464 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:11:22.464 | 99.99th=[16057] 00:11:22.464 write: IOPS=5039, BW=19.7MiB/s (20.6MB/s)(19.7MiB/1002msec); 0 zone resets 00:11:22.464 slat (usec): min=9, max=3292, avg=97.95, stdev=409.65 00:11:22.464 clat (usec): min=255, max=15885, avg=12717.78, stdev=1665.38 00:11:22.464 lat (usec): min=2579, max=15910, avg=12815.73, stdev=1660.73 00:11:22.464 clat percentiles (usec): 00:11:22.464 | 1.00th=[ 6718], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:11:22.464 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12911], 60.00th=[13435], 00:11:22.464 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14484], 95.00th=[14877], 00:11:22.464 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15795], 99.95th=[15926], 00:11:22.464 | 99.99th=[15926] 00:11:22.464 bw ( KiB/s): min=18896, max=20480, per=29.59%, avg=19688.00, stdev=1120.06, samples=2 00:11:22.464 iops : min= 4724, max= 5120, avg=4922.00, stdev=280.01, samples=2 00:11:22.464 lat (usec) : 500=0.01% 00:11:22.464 lat (msec) : 4=0.38%, 10=0.45%, 20=99.16% 00:11:22.464 cpu : usr=5.69%, sys=12.99%, ctx=515, majf=0, minf=3 00:11:22.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:22.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.464 issued rwts: total=4608,5050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.464 job3: (groupid=0, jobs=1): err= 0: pid=77218: Mon Jul 15 11:29:59 2024 00:11:22.464 read: IOPS=2954, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1003msec) 00:11:22.464 slat (usec): min=6, max=5897, avg=171.33, stdev=755.89 00:11:22.464 clat (usec): min=673, max=31392, avg=21510.72, stdev=3465.36 00:11:22.464 lat (usec): min=4774, max=31418, avg=21682.05, stdev=3433.16 00:11:22.464 clat percentiles (usec): 00:11:22.464 | 1.00th=[ 6259], 5.00th=[16581], 10.00th=[17957], 20.00th=[18482], 00:11:22.464 | 30.00th=[19792], 40.00th=[21365], 50.00th=[21890], 60.00th=[23200], 00:11:22.464 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[25560], 00:11:22.464 | 99.00th=[28705], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:11:22.464 | 99.99th=[31327] 00:11:22.464 write: IOPS=3062, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:22.464 slat (usec): min=15, max=5311, avg=152.13, stdev=685.80 00:11:22.464 clat (usec): min=11884, max=31841, avg=20328.68, stdev=3501.37 00:11:22.464 lat (usec): min=13642, max=31870, avg=20480.81, stdev=3457.29 00:11:22.464 clat percentiles (usec): 00:11:22.464 | 1.00th=[14091], 5.00th=[15008], 10.00th=[15533], 20.00th=[16319], 00:11:22.464 | 30.00th=[17433], 40.00th=[19792], 50.00th=[21890], 60.00th=[22152], 00:11:22.464 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23462], 95.00th=[25822], 00:11:22.464 | 99.00th=[28443], 99.50th=[28967], 99.90th=[31851], 99.95th=[31851], 00:11:22.464 | 99.99th=[31851] 00:11:22.464 bw ( KiB/s): min=12288, max=12288, per=18.47%, avg=12288.00, stdev= 0.00, samples=2 00:11:22.464 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:22.464 lat (usec) : 750=0.02% 00:11:22.464 lat (msec) : 10=0.53%, 20=35.29%, 50=64.16% 00:11:22.464 cpu : usr=2.50%, sys=10.48%, ctx=266, majf=0, minf=13 00:11:22.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:22.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.464 issued rwts: total=2963,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.464 00:11:22.464 Run status group 0 (all jobs): 00:11:22.464 READ: bw=60.7MiB/s (63.6MB/s), 9488KiB/s-22.0MiB/s (9715kB/s-23.0MB/s), io=60.9MiB (63.8MB), run=1001-1003msec 00:11:22.464 WRITE: bw=65.0MiB/s (68.1MB/s), 9.97MiB/s-23.4MiB/s (10.5MB/s-24.6MB/s), io=65.2MiB (68.3MB), run=1001-1003msec 00:11:22.464 00:11:22.464 Disk stats (read/write): 00:11:22.464 nvme0n1: ios=2098/2122, merge=0/0, ticks=13093/13142, in_queue=26235, util=89.88% 00:11:22.464 nvme0n2: ios=4999/5120, merge=0/0, ticks=16541/15191, in_queue=31732, util=89.10% 00:11:22.464 nvme0n3: ios=4129/4281, merge=0/0, ticks=12690/11980, in_queue=24670, util=90.27% 00:11:22.464 nvme0n4: ios=2581/2768, merge=0/0, ticks=13537/11966, in_queue=25503, util=90.32% 00:11:22.464 11:29:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:22.464 [global] 00:11:22.464 thread=1 00:11:22.464 invalidate=1 00:11:22.464 rw=randwrite 00:11:22.464 time_based=1 00:11:22.464 runtime=1 00:11:22.464 ioengine=libaio 00:11:22.464 direct=1 00:11:22.464 bs=4096 00:11:22.464 iodepth=128 00:11:22.464 norandommap=0 00:11:22.464 numjobs=1 00:11:22.464 00:11:22.464 verify_dump=1 00:11:22.464 verify_backlog=512 00:11:22.464 verify_state_save=0 00:11:22.464 do_verify=1 00:11:22.464 verify=crc32c-intel 00:11:22.464 [job0] 00:11:22.464 filename=/dev/nvme0n1 00:11:22.464 [job1] 00:11:22.464 filename=/dev/nvme0n2 00:11:22.464 [job2] 00:11:22.464 filename=/dev/nvme0n3 00:11:22.464 [job3] 00:11:22.464 filename=/dev/nvme0n4 00:11:22.464 Could not set queue depth (nvme0n1) 00:11:22.464 Could not set queue depth (nvme0n2) 00:11:22.464 Could not set queue depth (nvme0n3) 00:11:22.464 Could not set queue depth (nvme0n4) 00:11:22.464 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.464 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.464 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:11:22.464 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.464 fio-3.35 00:11:22.464 Starting 4 threads 00:11:23.839 00:11:23.839 job0: (groupid=0, jobs=1): err= 0: pid=77271: Mon Jul 15 11:30:00 2024 00:11:23.839 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:11:23.839 slat (usec): min=5, max=11018, avg=93.96, stdev=587.54 00:11:23.839 clat (usec): min=4541, max=23462, avg=11869.01, stdev=3042.66 00:11:23.839 lat (usec): min=4552, max=23480, avg=11962.96, stdev=3071.90 00:11:23.839 clat percentiles (usec): 00:11:23.839 | 1.00th=[ 5473], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9765], 00:11:23.839 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11338], 00:11:23.839 | 70.00th=[12518], 80.00th=[13829], 90.00th=[16581], 95.00th=[18744], 00:11:23.839 | 99.00th=[20841], 99.50th=[21890], 99.90th=[23200], 99.95th=[23462], 00:11:23.839 | 99.99th=[23462] 00:11:23.839 write: IOPS=5857, BW=22.9MiB/s (24.0MB/s)(23.1MiB/1009msec); 0 zone resets 00:11:23.839 slat (usec): min=4, max=9006, avg=71.91, stdev=308.38 00:11:23.839 clat (usec): min=3926, max=23387, avg=10300.80, stdev=2337.27 00:11:23.839 lat (usec): min=3941, max=23397, avg=10372.71, stdev=2358.46 00:11:23.839 clat percentiles (usec): 00:11:23.839 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 6194], 20.00th=[ 8717], 00:11:23.839 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:11:23.839 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:11:23.839 | 99.00th=[17171], 99.50th=[19530], 99.90th=[20579], 99.95th=[21103], 00:11:23.839 | 99.99th=[23462] 00:11:23.839 bw ( KiB/s): min=21704, max=24560, per=35.30%, avg=23132.00, stdev=2019.50, samples=2 00:11:23.839 iops : min= 5426, max= 6140, avg=5783.00, stdev=504.87, samples=2 00:11:23.839 lat (msec) : 4=0.03%, 10=25.57%, 20=73.11%, 50=1.29% 00:11:23.839 cpu : usr=5.36%, sys=13.59%, ctx=861, majf=0, minf=15 00:11:23.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:23.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.839 issued rwts: total=5632,5910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.839 job1: (groupid=0, jobs=1): err= 0: pid=77272: Mon Jul 15 11:30:00 2024 00:11:23.839 read: IOPS=2607, BW=10.2MiB/s (10.7MB/s)(10.4MiB/1018msec) 00:11:23.839 slat (usec): min=3, max=20949, avg=196.28, stdev=1347.20 00:11:23.839 clat (usec): min=5981, max=80246, avg=23195.41, stdev=9978.30 00:11:23.839 lat (usec): min=5999, max=80256, avg=23391.69, stdev=10088.33 00:11:23.840 clat percentiles (usec): 00:11:23.840 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[12387], 20.00th=[14222], 00:11:23.840 | 30.00th=[19530], 40.00th=[22676], 50.00th=[23200], 60.00th=[23725], 00:11:23.840 | 70.00th=[23987], 80.00th=[26608], 90.00th=[34866], 95.00th=[40109], 00:11:23.840 | 99.00th=[66323], 99.50th=[77071], 99.90th=[80217], 99.95th=[80217], 00:11:23.840 | 99.99th=[80217] 00:11:23.840 write: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec); 0 zone resets 00:11:23.840 slat (usec): min=5, max=29275, avg=147.91, stdev=906.54 00:11:23.840 clat (usec): min=4060, max=86357, avg=21918.01, stdev=10478.91 00:11:23.840 lat (usec): min=4089, max=86368, avg=22065.92, stdev=10528.57 00:11:23.840 clat percentiles (usec): 00:11:23.840 | 1.00th=[ 6128], 5.00th=[10028], 
10.00th=[10421], 20.00th=[11338], 00:11:23.840 | 30.00th=[19792], 40.00th=[22676], 50.00th=[23725], 60.00th=[24249], 00:11:23.840 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[33424], 00:11:23.840 | 99.00th=[80217], 99.50th=[83362], 99.90th=[86508], 99.95th=[86508], 00:11:23.840 | 99.99th=[86508] 00:11:23.840 bw ( KiB/s): min=12032, max=12280, per=18.55%, avg=12156.00, stdev=175.36, samples=2 00:11:23.840 iops : min= 3008, max= 3070, avg=3039.00, stdev=43.84, samples=2 00:11:23.840 lat (msec) : 10=3.93%, 20=26.41%, 50=67.53%, 100=2.13% 00:11:23.840 cpu : usr=3.34%, sys=6.98%, ctx=351, majf=0, minf=13 00:11:23.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:23.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.840 issued rwts: total=2654,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.840 job2: (groupid=0, jobs=1): err= 0: pid=77273: Mon Jul 15 11:30:00 2024 00:11:23.840 read: IOPS=2490, BW=9961KiB/s (10.2MB/s)(10.0MiB/1028msec) 00:11:23.840 slat (usec): min=6, max=22678, avg=196.83, stdev=1388.20 00:11:23.840 clat (usec): min=9059, max=48422, avg=24077.27, stdev=7301.75 00:11:23.840 lat (usec): min=9076, max=48457, avg=24274.10, stdev=7391.15 00:11:23.840 clat percentiles (usec): 00:11:23.840 | 1.00th=[10028], 5.00th=[13042], 10.00th=[13698], 20.00th=[19006], 00:11:23.840 | 30.00th=[21890], 40.00th=[22676], 50.00th=[22938], 60.00th=[23987], 00:11:23.840 | 70.00th=[26346], 80.00th=[31065], 90.00th=[33817], 95.00th=[36963], 00:11:23.840 | 99.00th=[43254], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:11:23.840 | 99.99th=[48497] 00:11:23.840 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.5MiB/1028msec); 0 zone resets 00:11:23.840 slat (usec): min=4, max=18110, avg=176.09, stdev=884.40 00:11:23.840 clat (usec): min=3733, max=98624, avg=25555.15, stdev=13740.72 00:11:23.840 lat (usec): min=3767, max=98634, avg=25731.24, stdev=13821.94 00:11:23.840 clat percentiles (usec): 00:11:23.840 | 1.00th=[ 6718], 5.00th=[10290], 10.00th=[13435], 20.00th=[21365], 00:11:23.840 | 30.00th=[22938], 40.00th=[23725], 50.00th=[24249], 60.00th=[24511], 00:11:23.840 | 70.00th=[24773], 80.00th=[25297], 90.00th=[30802], 95.00th=[53740], 00:11:23.840 | 99.00th=[92799], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091], 00:11:23.840 | 99.99th=[99091] 00:11:23.840 bw ( KiB/s): min= 8264, max=12247, per=15.65%, avg=10255.50, stdev=2816.41, samples=2 00:11:23.840 iops : min= 2066, max= 3061, avg=2563.50, stdev=703.57, samples=2 00:11:23.840 lat (msec) : 4=0.10%, 10=2.23%, 20=16.89%, 50=78.06%, 100=2.73% 00:11:23.840 cpu : usr=3.02%, sys=7.01%, ctx=342, majf=0, minf=11 00:11:23.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:23.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.840 issued rwts: total=2560,2685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.840 job3: (groupid=0, jobs=1): err= 0: pid=77274: Mon Jul 15 11:30:00 2024 00:11:23.840 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:11:23.840 slat (usec): min=6, max=5893, avg=95.12, stdev=447.89 00:11:23.840 clat (usec): min=7512, max=18428, avg=12589.48, stdev=1345.45 00:11:23.840 lat 
(usec): min=7525, max=18467, avg=12684.60, stdev=1383.45 00:11:23.840 clat percentiles (usec): 00:11:23.840 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[11207], 20.00th=[11863], 00:11:23.840 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:11:23.840 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14353], 95.00th=[15008], 00:11:23.840 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:11:23.840 | 99.99th=[18482] 00:11:23.840 write: IOPS=5165, BW=20.2MiB/s (21.2MB/s)(20.2MiB/1002msec); 0 zone resets 00:11:23.840 slat (usec): min=11, max=4894, avg=90.92, stdev=446.90 00:11:23.840 clat (usec): min=519, max=18069, avg=12031.92, stdev=1396.30 00:11:23.840 lat (usec): min=4058, max=18612, avg=12122.84, stdev=1447.18 00:11:23.840 clat percentiles (usec): 00:11:23.840 | 1.00th=[ 7308], 5.00th=[10290], 10.00th=[11076], 20.00th=[11600], 00:11:23.840 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:11:23.840 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[14222], 00:11:23.840 | 99.00th=[16712], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:11:23.840 | 99.99th=[17957] 00:11:23.840 bw ( KiB/s): min=20480, max=20480, per=31.25%, avg=20480.00, stdev= 0.00, samples=2 00:11:23.840 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:11:23.840 lat (usec) : 750=0.01% 00:11:23.840 lat (msec) : 10=3.84%, 20=96.15% 00:11:23.840 cpu : usr=5.39%, sys=14.09%, ctx=545, majf=0, minf=8 00:11:23.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:23.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.840 issued rwts: total=5120,5176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.840 00:11:23.840 Run status group 0 (all jobs): 00:11:23.840 READ: bw=60.7MiB/s (63.6MB/s), 9961KiB/s-21.8MiB/s (10.2MB/s-22.9MB/s), io=62.4MiB (65.4MB), run=1002-1028msec 00:11:23.840 WRITE: bw=64.0MiB/s (67.1MB/s), 10.2MiB/s-22.9MiB/s (10.7MB/s-24.0MB/s), io=65.8MiB (69.0MB), run=1002-1028msec 00:11:23.840 00:11:23.840 Disk stats (read/write): 00:11:23.840 nvme0n1: ios=4658/5120, merge=0/0, ticks=50789/50764, in_queue=101553, util=88.58% 00:11:23.840 nvme0n2: ios=2460/2560, merge=0/0, ticks=53178/50691, in_queue=103869, util=87.91% 00:11:23.840 nvme0n3: ios=2054/2143, merge=0/0, ticks=50985/48229, in_queue=99214, util=89.29% 00:11:23.840 nvme0n4: ios=4164/4608, merge=0/0, ticks=24821/23739, in_queue=48560, util=89.76% 00:11:23.840 11:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:23.840 11:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77294 00:11:23.840 11:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:23.840 11:30:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:23.840 [global] 00:11:23.840 thread=1 00:11:23.840 invalidate=1 00:11:23.840 rw=read 00:11:23.840 time_based=1 00:11:23.840 runtime=10 00:11:23.840 ioengine=libaio 00:11:23.840 direct=1 00:11:23.840 bs=4096 00:11:23.840 iodepth=1 00:11:23.840 norandommap=1 00:11:23.840 numjobs=1 00:11:23.840 00:11:23.840 [job0] 00:11:23.840 filename=/dev/nvme0n1 00:11:23.840 [job1] 00:11:23.840 filename=/dev/nvme0n2 00:11:23.840 [job2] 00:11:23.840 filename=/dev/nvme0n3 00:11:23.840 [job3] 00:11:23.840 filename=/dev/nvme0n4 
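
The [global]/[jobN] options printed just above for this 10-second read pass map onto an ordinary standalone fio job file. The following is a reconstruction for replaying the same workload by hand against the four connected namespaces; the /tmp path is illustrative only, and scripts/fio-wrapper may generate its input in a slightly different form:

    # Reconstructed from the job configuration printed above (not the wrapper's own file).
    cat > /tmp/nvmf_fio_read.fio <<'EOF'
    ; 10-second sequential read, queue depth 1, one job per namespace
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio /tmp/nvmf_fio_read.fio

This is the run that target/fio.sh deliberately disrupts: while it is in flight the script deletes the raid, concat, and malloc bdevs over RPC, so the io_u "Remote I/O error" messages and fio_status=4 further down are the expected outcome of the hotplug test rather than a regression.
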
00:11:23.840 Could not set queue depth (nvme0n1) 00:11:23.840 Could not set queue depth (nvme0n2) 00:11:23.840 Could not set queue depth (nvme0n3) 00:11:23.840 Could not set queue depth (nvme0n4) 00:11:23.840 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.840 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.840 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.840 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.840 fio-3.35 00:11:23.840 Starting 4 threads 00:11:27.123 11:30:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:27.123 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35082240, buflen=4096 00:11:27.123 fio: pid=77343, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:27.123 11:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:27.123 fio: pid=77342, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:27.123 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=38117376, buflen=4096 00:11:27.123 11:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.123 11:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:27.379 fio: pid=77334, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:27.379 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=52736000, buflen=4096 00:11:27.380 11:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.380 11:30:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:27.637 fio: pid=77336, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:27.637 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=58482688, buflen=4096 00:11:27.637 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.637 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:27.637 00:11:27.637 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77334: Mon Jul 15 11:30:05 2024 00:11:27.637 read: IOPS=3747, BW=14.6MiB/s (15.3MB/s)(50.3MiB/3436msec) 00:11:27.637 slat (usec): min=5, max=10776, avg=19.29, stdev=166.85 00:11:27.637 clat (usec): min=82, max=7831, avg=245.84, stdev=152.01 00:11:27.637 lat (usec): min=153, max=11091, avg=265.12, stdev=225.83 00:11:27.637 clat percentiles (usec): 00:11:27.637 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:11:27.637 | 30.00th=[ 178], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:11:27.637 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:11:27.637 | 99.00th=[ 383], 99.50th=[ 420], 99.90th=[ 1012], 99.95th=[ 3359], 00:11:27.637 | 99.99th=[ 7767] 00:11:27.637 bw ( KiB/s): min=13240, max=21768, per=31.44%, avg=15314.67, 
stdev=3321.65, samples=6 00:11:27.637 iops : min= 3310, max= 5442, avg=3828.67, stdev=830.41, samples=6 00:11:27.637 lat (usec) : 100=0.01%, 250=36.82%, 500=62.94%, 750=0.09%, 1000=0.04% 00:11:27.637 lat (msec) : 2=0.02%, 4=0.04%, 10=0.04% 00:11:27.637 cpu : usr=1.31%, sys=5.07%, ctx=13241, majf=0, minf=1 00:11:27.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.637 issued rwts: total=12876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.637 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77336: Mon Jul 15 11:30:05 2024 00:11:27.637 read: IOPS=3862, BW=15.1MiB/s (15.8MB/s)(55.8MiB/3697msec) 00:11:27.637 slat (usec): min=7, max=22005, avg=20.35, stdev=239.82 00:11:27.637 clat (usec): min=125, max=14726, avg=236.81, stdev=143.07 00:11:27.637 lat (usec): min=149, max=23050, avg=257.16, stdev=283.50 00:11:27.637 clat percentiles (usec): 00:11:27.637 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:11:27.637 | 30.00th=[ 174], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 269], 00:11:27.637 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 314], 00:11:27.637 | 99.00th=[ 379], 99.50th=[ 412], 99.90th=[ 734], 99.95th=[ 873], 00:11:27.637 | 99.99th=[ 3589] 00:11:27.637 bw ( KiB/s): min=13392, max=21352, per=31.47%, avg=15330.86, stdev=2985.68, samples=7 00:11:27.637 iops : min= 3348, max= 5338, avg=3832.71, stdev=746.42, samples=7 00:11:27.637 lat (usec) : 250=42.96%, 500=56.83%, 750=0.11%, 1000=0.04% 00:11:27.637 lat (msec) : 2=0.02%, 4=0.02%, 20=0.01% 00:11:27.637 cpu : usr=1.38%, sys=5.25%, ctx=14461, majf=0, minf=1 00:11:27.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.637 issued rwts: total=14279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.637 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77342: Mon Jul 15 11:30:05 2024 00:11:27.637 read: IOPS=2910, BW=11.4MiB/s (11.9MB/s)(36.4MiB/3198msec) 00:11:27.637 slat (usec): min=7, max=8700, avg=27.82, stdev=124.32 00:11:27.637 clat (usec): min=147, max=8305, avg=313.16, stdev=104.62 00:11:27.637 lat (usec): min=162, max=9056, avg=340.99, stdev=163.29 00:11:27.637 clat percentiles (usec): 00:11:27.637 | 1.00th=[ 165], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 285], 00:11:27.637 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 310], 00:11:27.637 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 412], 00:11:27.637 | 99.00th=[ 478], 99.50th=[ 586], 99.90th=[ 988], 99.95th=[ 1139], 00:11:27.637 | 99.99th=[ 8291] 00:11:27.637 bw ( KiB/s): min= 9896, max=12472, per=24.07%, avg=11726.67, stdev=974.00, samples=6 00:11:27.637 iops : min= 2474, max= 3118, avg=2931.67, stdev=243.50, samples=6 00:11:27.637 lat (usec) : 250=3.59%, 500=95.61%, 750=0.53%, 1000=0.17% 00:11:27.637 lat (msec) : 2=0.08%, 4=0.01%, 10=0.01% 00:11:27.637 cpu : usr=1.47%, sys=6.19%, ctx=9492, majf=0, minf=1 00:11:27.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:11:27.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.637 issued rwts: total=9307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.637 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77343: Mon Jul 15 11:30:05 2024 00:11:27.637 read: IOPS=2901, BW=11.3MiB/s (11.9MB/s)(33.5MiB/2952msec) 00:11:27.637 slat (usec): min=13, max=129, avg=19.82, stdev= 6.67 00:11:27.637 clat (usec): min=165, max=3332, avg=322.57, stdev=70.34 00:11:27.637 lat (usec): min=184, max=3363, avg=342.39, stdev=71.51 00:11:27.637 clat percentiles (usec): 00:11:27.637 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:11:27.637 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:11:27.637 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 379], 95.00th=[ 424], 00:11:27.637 | 99.00th=[ 478], 99.50th=[ 578], 99.90th=[ 906], 99.95th=[ 1713], 00:11:27.637 | 99.99th=[ 3326] 00:11:27.637 bw ( KiB/s): min=10120, max=12344, per=23.81%, avg=11598.40, stdev=928.66, samples=5 00:11:27.637 iops : min= 2530, max= 3086, avg=2899.60, stdev=232.17, samples=5 00:11:27.637 lat (usec) : 250=0.56%, 500=98.66%, 750=0.50%, 1000=0.19% 00:11:27.637 lat (msec) : 2=0.05%, 4=0.04% 00:11:27.637 cpu : usr=0.88%, sys=4.88%, ctx=8571, majf=0, minf=1 00:11:27.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.638 issued rwts: total=8566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.638 00:11:27.638 Run status group 0 (all jobs): 00:11:27.638 READ: bw=47.6MiB/s (49.9MB/s), 11.3MiB/s-15.1MiB/s (11.9MB/s-15.8MB/s), io=176MiB (184MB), run=2952-3697msec 00:11:27.638 00:11:27.638 Disk stats (read/write): 00:11:27.638 nvme0n1: ios=12623/0, merge=0/0, ticks=3102/0, in_queue=3102, util=94.99% 00:11:27.638 nvme0n2: ios=13819/0, merge=0/0, ticks=3314/0, in_queue=3314, util=95.26% 00:11:27.638 nvme0n3: ios=9081/0, merge=0/0, ticks=2883/0, in_queue=2883, util=96.21% 00:11:27.638 nvme0n4: ios=8344/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.76% 00:11:27.895 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.895 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:28.153 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.153 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:28.411 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.411 11:30:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.669 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.669 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77294 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:28.948 nvmf hotplug test: fio failed as expected 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:28.948 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.211 rmmod nvme_tcp 00:11:29.211 rmmod nvme_fabrics 00:11:29.211 rmmod nvme_keyring 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76810 ']' 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76810 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76810 ']' 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76810 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@953 -- # uname 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76810 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76810' 00:11:29.211 killing process with pid 76810 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76810 00:11:29.211 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76810 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:29.470 00:11:29.470 real 0m19.069s 00:11:29.470 user 1m13.358s 00:11:29.470 sys 0m8.772s 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.470 ************************************ 00:11:29.470 11:30:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.470 END TEST nvmf_fio_target 00:11:29.470 ************************************ 00:11:29.470 11:30:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.470 11:30:06 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.470 11:30:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.470 11:30:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.470 11:30:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.470 ************************************ 00:11:29.470 START TEST nvmf_bdevio 00:11:29.470 ************************************ 00:11:29.470 11:30:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:29.729 * Looking for test storage... 
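The hotplug teardown traced above (deleting the malloc bdevs while fio is still running, then checking that fio exits non-zero) boils down to the following sketch. It is illustrative only: $fio_pid stands in for the literal pid 77294 used by target/fio.sh, and error handling is omitted.

    # Sketch of the nvmf hotplug check, assuming fio was started in the background
    # against the connected subsystem and its pid saved in $fio_pid (placeholder).
    for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?      # fio is expected to fail with remote I/O errors
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    if [ "$fio_status" -ne 0 ]; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi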
00:11:29.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.729 11:30:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.729 11:30:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.729 11:30:07 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.729 11:30:07 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:29.730 Cannot find device "nvmf_tgt_br" 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.730 Cannot find device "nvmf_tgt_br2" 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:29.730 Cannot find device "nvmf_tgt_br" 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:29.730 Cannot find device "nvmf_tgt_br2" 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.730 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:29.988 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:29.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:11:29.989 00:11:29.989 --- 10.0.0.2 ping statistics --- 00:11:29.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.989 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:29.989 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.989 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:29.989 00:11:29.989 --- 10.0.0.3 ping statistics --- 00:11:29.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.989 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:29.989 00:11:29.989 --- 10.0.0.1 ping statistics --- 00:11:29.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.989 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77665 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77665 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77665 ']' 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.989 11:30:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:29.989 [2024-07-15 11:30:07.407553] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:11:29.989 [2024-07-15 11:30:07.407657] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.247 [2024-07-15 11:30:07.546842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.247 [2024-07-15 11:30:07.616497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.247 [2024-07-15 11:30:07.616562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
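For readability, the nvmf_veth_init sequence traced above builds the following topology. This is a condensed sketch using the interface names and addresses from the trace; the second target interface (nvmf_tgt_if2 with 10.0.0.3) is created the same way and is omitted here.

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the two halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                              # initiator -> target sanity check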
00:11:30.247 [2024-07-15 11:30:07.616578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.247 [2024-07-15 11:30:07.616589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.247 [2024-07-15 11:30:07.616598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.248 [2024-07-15 11:30:07.616700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:30.248 [2024-07-15 11:30:07.616784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:30.248 [2024-07-15 11:30:07.618451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:30.248 [2024-07-15 11:30:07.618464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.182 [2024-07-15 11:30:08.424332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.182 Malloc0 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
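The rpc_cmd calls traced above map one-to-one onto plain rpc.py invocations. As a sketch, the same target-side setup outside the test harness would be:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420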
00:11:31.182 [2024-07-15 11:30:08.485670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:31.182 { 00:11:31.182 "params": { 00:11:31.182 "name": "Nvme$subsystem", 00:11:31.182 "trtype": "$TEST_TRANSPORT", 00:11:31.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.182 "adrfam": "ipv4", 00:11:31.182 "trsvcid": "$NVMF_PORT", 00:11:31.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.182 "hdgst": ${hdgst:-false}, 00:11:31.182 "ddgst": ${ddgst:-false} 00:11:31.182 }, 00:11:31.182 "method": "bdev_nvme_attach_controller" 00:11:31.182 } 00:11:31.182 EOF 00:11:31.182 )") 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:31.182 11:30:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:31.182 "params": { 00:11:31.182 "name": "Nvme1", 00:11:31.182 "trtype": "tcp", 00:11:31.182 "traddr": "10.0.0.2", 00:11:31.182 "adrfam": "ipv4", 00:11:31.182 "trsvcid": "4420", 00:11:31.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:31.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:31.182 "hdgst": false, 00:11:31.182 "ddgst": false 00:11:31.182 }, 00:11:31.182 "method": "bdev_nvme_attach_controller" 00:11:31.182 }' 00:11:31.182 [2024-07-15 11:30:08.536150] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
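The JSON printed above is what bdevio receives on /dev/fd/62. The same controller attach could also be issued against an already-running SPDK app with rpc.py; this is an illustrative sketch only, with flag names taken from the standard bdev_nvme_attach_controller options rather than from the test script itself.

    scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1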
00:11:31.182 [2024-07-15 11:30:08.536236] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77719 ] 00:11:31.440 [2024-07-15 11:30:08.672145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.440 [2024-07-15 11:30:08.745659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.440 [2024-07-15 11:30:08.745725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.440 [2024-07-15 11:30:08.745730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.440 I/O targets: 00:11:31.440 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:31.440 00:11:31.440 00:11:31.440 CUnit - A unit testing framework for C - Version 2.1-3 00:11:31.440 http://cunit.sourceforge.net/ 00:11:31.440 00:11:31.440 00:11:31.440 Suite: bdevio tests on: Nvme1n1 00:11:31.698 Test: blockdev write read block ...passed 00:11:31.698 Test: blockdev write zeroes read block ...passed 00:11:31.698 Test: blockdev write zeroes read no split ...passed 00:11:31.698 Test: blockdev write zeroes read split ...passed 00:11:31.698 Test: blockdev write zeroes read split partial ...passed 00:11:31.698 Test: blockdev reset ...[2024-07-15 11:30:09.007106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:31.698 [2024-07-15 11:30:09.007250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2105180 (9): Bad file descriptor 00:11:31.698 [2024-07-15 11:30:09.026323] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:31.698 passed 00:11:31.698 Test: blockdev write read 8 blocks ...passed 00:11:31.698 Test: blockdev write read size > 128k ...passed 00:11:31.698 Test: blockdev write read invalid size ...passed 00:11:31.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:31.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:31.698 Test: blockdev write read max offset ...passed 00:11:31.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:31.698 Test: blockdev writev readv 8 blocks ...passed 00:11:31.698 Test: blockdev writev readv 30 x 1block ...passed 00:11:31.957 Test: blockdev writev readv block ...passed 00:11:31.957 Test: blockdev writev readv size > 128k ...passed 00:11:31.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:31.957 Test: blockdev comparev and writev ...[2024-07-15 11:30:09.198979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.199044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.199070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.199084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.199630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.199665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.199687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.199700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.200111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.200147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.200168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.200181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.200745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.200778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.200799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:31.957 [2024-07-15 11:30:09.200812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:31.957 passed 00:11:31.957 Test: blockdev nvme passthru rw ...passed 00:11:31.957 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:30:09.282925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.957 [2024-07-15 11:30:09.282978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.283111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.957 [2024-07-15 11:30:09.283130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.283260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.957 [2024-07-15 11:30:09.283278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:31.957 [2024-07-15 11:30:09.283413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:31.957 [2024-07-15 11:30:09.283431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:31.957 passed 00:11:31.957 Test: blockdev nvme admin passthru ...passed 00:11:31.957 Test: blockdev copy ...passed 00:11:31.957 00:11:31.957 Run Summary: Type Total Ran Passed Failed Inactive 00:11:31.957 suites 1 1 n/a 0 0 00:11:31.957 tests 23 23 23 0 0 00:11:31.957 asserts 152 152 152 0 n/a 00:11:31.957 00:11:31.957 Elapsed time = 0.892 seconds 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.215 rmmod nvme_tcp 00:11:32.215 rmmod nvme_fabrics 00:11:32.215 rmmod nvme_keyring 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77665 ']' 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77665 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77665 ']' 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77665 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77665 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:32.215 killing process with pid 77665 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77665' 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77665 00:11:32.215 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77665 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:32.473 00:11:32.473 real 0m2.901s 00:11:32.473 user 0m10.490s 00:11:32.473 sys 0m0.685s 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.473 11:30:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.473 ************************************ 00:11:32.473 END TEST nvmf_bdevio 00:11:32.473 ************************************ 00:11:32.473 11:30:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:32.473 11:30:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:32.473 11:30:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.473 11:30:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.473 11:30:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.473 ************************************ 00:11:32.473 START TEST nvmf_auth_target 00:11:32.473 ************************************ 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:32.473 * Looking for test storage... 
00:11:32.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.473 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:32.732 Cannot find device "nvmf_tgt_br" 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.732 Cannot find device "nvmf_tgt_br2" 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:32.732 11:30:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:32.732 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:32.732 Cannot find device "nvmf_tgt_br" 00:11:32.732 
11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:32.732 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:32.732 Cannot find device "nvmf_tgt_br2" 00:11:32.732 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:32.732 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:32.732 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.733 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:33.038 11:30:10 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:33.038 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:33.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:11:33.038 00:11:33.039 --- 10.0.0.2 ping statistics --- 00:11:33.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.039 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:33.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:33.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:33.039 00:11:33.039 --- 10.0.0.3 ping statistics --- 00:11:33.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.039 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:33.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:11:33.039 00:11:33.039 --- 10.0.0.1 ping statistics --- 00:11:33.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.039 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77893 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77893 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77893 ']' 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.039 11:30:10 
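With the namespace wired up, reachability is proven in both directions and the nvmf target is started inside the namespace so that it owns 10.0.0.2/10.0.0.3 and listens for NVMe/TCP on port 4420. The binary path, flags and firewall rules below are exactly those from the log:

modprobe nvme-tcp                                                    # kernel initiator used later by `nvme connect`
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &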
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.039 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77924 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3946130f649f3edca20942a7eceb2a3d798816146b1a8d1f 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cAm 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3946130f649f3edca20942a7eceb2a3d798816146b1a8d1f 0 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3946130f649f3edca20942a7eceb2a3d798816146b1a8d1f 0 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3946130f649f3edca20942a7eceb2a3d798816146b1a8d1f 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:33.297 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cAm 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cAm 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- 
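gen_dhchap_key above draws len/2 random bytes as a hex string (24 bytes for this 48-character "null" key) and pipes it through an inline `python -` whose body xtrace does not capture. The sketch below is therefore a reconstruction, not the script's literal code, but it reproduces the DHHC-1:<id>:<base64>: secrets that appear verbatim later in this log (digest ids 00=null, 01=sha256, 02=sha384, 03=sha512): the secret is the base64 of the hex string followed by its little-endian CRC-32, wrapped in the DHHC-1 prefix with a trailing colon. The CRC-32 trailer is an assumption.

key=$(xxd -p -c0 -l 24 /dev/urandom)        # 48 hex characters of key material
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'         # second argument: digest id (0 = null)
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))    # assumed CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"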
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.cAm 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=668f7d31948138803d69216f832b7464d02a8684ac454d57a69d7a60bb6bd0cb 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8w4 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 668f7d31948138803d69216f832b7464d02a8684ac454d57a69d7a60bb6bd0cb 3 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 668f7d31948138803d69216f832b7464d02a8684ac454d57a69d7a60bb6bd0cb 3 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=668f7d31948138803d69216f832b7464d02a8684ac454d57a69d7a60bb6bd0cb 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8w4 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8w4 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.8w4 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2b01c5467218082a89d0c73a2f033273 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IAG 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2b01c5467218082a89d0c73a2f033273 1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2b01c5467218082a89d0c73a2f033273 1 
00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2b01c5467218082a89d0c73a2f033273 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IAG 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IAG 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.IAG 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c9ec5138e3fe371f3e3c934f983aa0eb5b9bff9fce73dcb1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Xk5 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c9ec5138e3fe371f3e3c934f983aa0eb5b9bff9fce73dcb1 2 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c9ec5138e3fe371f3e3c934f983aa0eb5b9bff9fce73dcb1 2 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c9ec5138e3fe371f3e3c934f983aa0eb5b9bff9fce73dcb1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Xk5 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Xk5 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Xk5 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:33.555 
11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bb33d8dc86be7f5b5a28bd2bc018d8802365f7770cf00a5e 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.UI6 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bb33d8dc86be7f5b5a28bd2bc018d8802365f7770cf00a5e 2 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bb33d8dc86be7f5b5a28bd2bc018d8802365f7770cf00a5e 2 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bb33d8dc86be7f5b5a28bd2bc018d8802365f7770cf00a5e 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:33.555 11:30:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.UI6 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.UI6 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.UI6 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f5cce1d1422b58b9a6544f8739c4b08a 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bzN 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f5cce1d1422b58b9a6544f8739c4b08a 1 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f5cce1d1422b58b9a6544f8739c4b08a 1 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f5cce1d1422b58b9a6544f8739c4b08a 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bzN 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bzN 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.bzN 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2c8787805e2ccf640e0614b08aec581c711fa076c911bd531c36162a51f489f8 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.h6N 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2c8787805e2ccf640e0614b08aec581c711fa076c911bd531c36162a51f489f8 3 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2c8787805e2ccf640e0614b08aec581c711fa076c911bd531c36162a51f489f8 3 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2c8787805e2ccf640e0614b08aec581c711fa076c911bd531c36162a51f489f8 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.h6N 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.h6N 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.h6N 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77893 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77893 ']' 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:33.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:33.812 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.068 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.068 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:34.068 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77924 /var/tmp/host.sock 00:11:34.068 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77924 ']' 00:11:34.068 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:34.069 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:34.069 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:34.069 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.069 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cAm 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cAm 00:11:34.326 11:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cAm 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.8w4 ]] 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8w4 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8w4 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.8w4 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IAG 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IAG 00:11:34.905 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IAG 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Xk5 ]] 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xk5 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xk5 00:11:35.191 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Xk5 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UI6 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.UI6 00:11:35.449 11:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.UI6 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.bzN ]] 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bzN 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bzN 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bzN 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:36.015 
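From here on two SPDK applications are driven over JSON-RPC: nvmf_tgt (pid 77893, the default /var/tmp/spdk.sock socket named in the waitforlisten messages) plays the NVMe-oF target, and spdk_tgt (pid 77924, started with -r /var/tmp/host.sock -L nvme_auth) plays the host through its bdev_nvme layer. The `hostrpc` helper expanded at target/auth.sh@31 is just rpc.py pointed at the host socket, and every generated secret file is registered on both sides under a short key name so later RPCs can say "key0"/"ckey0" instead of carrying raw secrets. Condensed (rpc_cmd is assumed here to address the target's default socket; its wrapper body is not shown in the log):

hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cAm       # target side
hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cAm       # host side
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8w4    # controller (bidirectional) key
hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8w4
# ... and likewise key1/ckey1, key2/ckey2 and key3 (key3 has no companion ckey).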
11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.h6N 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.h6N 00:11:36.015 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.h6N 00:11:36.272 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:36.272 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:36.272 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.272 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.272 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.272 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.531 11:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.789 00:11:37.047 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.047 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
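One pass of connect_authenticate (digest sha256, DH group null, key0) therefore amounts to the three RPCs below, with the NQNs and address exactly as used above: the host application is told which digests and DH groups it may negotiate, the target is told to admit this host only via DH-HMAC-CHAP with key0 (and to authenticate itself back with ckey0), and the host attaches a controller, which forces the authentication exchange to run:

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0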
bdev_nvme_get_controllers 00:11:37.047 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.305 { 00:11:37.305 "auth": { 00:11:37.305 "dhgroup": "null", 00:11:37.305 "digest": "sha256", 00:11:37.305 "state": "completed" 00:11:37.305 }, 00:11:37.305 "cntlid": 1, 00:11:37.305 "listen_address": { 00:11:37.305 "adrfam": "IPv4", 00:11:37.305 "traddr": "10.0.0.2", 00:11:37.305 "trsvcid": "4420", 00:11:37.305 "trtype": "TCP" 00:11:37.305 }, 00:11:37.305 "peer_address": { 00:11:37.305 "adrfam": "IPv4", 00:11:37.305 "traddr": "10.0.0.1", 00:11:37.305 "trsvcid": "45036", 00:11:37.305 "trtype": "TCP" 00:11:37.305 }, 00:11:37.305 "qid": 0, 00:11:37.305 "state": "enabled", 00:11:37.305 "thread": "nvmf_tgt_poll_group_000" 00:11:37.305 } 00:11:37.305 ]' 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.305 11:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.871 11:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
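Success is not taken on faith from the attach: the test reads the controller list on the host side and the qpair list on the target side, then checks that the qpair reports the negotiated digest, the configured DH group and a completed auth state before detaching again. The checks reduce to:

hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                  # expect nvme0
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest' <<< "$qpairs"     # expect sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"    # expect null (later passes: ffdhe2048)
jq -r '.[0].auth.state' <<< "$qpairs"      # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0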
"${!keys[@]}" 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.137 11:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.137 00:11:43.137 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.137 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.137 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.396 { 00:11:43.396 "auth": { 00:11:43.396 "dhgroup": "null", 00:11:43.396 "digest": "sha256", 00:11:43.396 "state": "completed" 00:11:43.396 }, 00:11:43.396 "cntlid": 3, 00:11:43.396 "listen_address": { 00:11:43.396 "adrfam": "IPv4", 00:11:43.396 "traddr": "10.0.0.2", 00:11:43.396 "trsvcid": "4420", 00:11:43.396 "trtype": "TCP" 00:11:43.396 }, 00:11:43.396 "peer_address": { 
00:11:43.396 "adrfam": "IPv4", 00:11:43.396 "traddr": "10.0.0.1", 00:11:43.396 "trsvcid": "43648", 00:11:43.396 "trtype": "TCP" 00:11:43.396 }, 00:11:43.396 "qid": 0, 00:11:43.396 "state": "enabled", 00:11:43.396 "thread": "nvmf_tgt_poll_group_000" 00:11:43.396 } 00:11:43.396 ]' 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.396 11:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.962 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:44.529 11:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
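Each key is also exercised through the kernel initiator: nvme-cli is handed the formatted DHHC-1 secrets directly, where --dhchap-secret is the formatted form of the same hex key registered on the target and --dhchap-ctrl-secret the formatted controller key, and the subsystem's host entry is removed again afterwards. For key1, the pass shown above, that is:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
    --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 \
    --dhchap-secret 'DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH:' \
    --dhchap-ctrl-secret 'DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421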
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.787 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.354 00:11:45.354 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.354 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.354 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.613 { 00:11:45.613 "auth": { 00:11:45.613 "dhgroup": "null", 00:11:45.613 "digest": "sha256", 00:11:45.613 "state": "completed" 00:11:45.613 }, 00:11:45.613 "cntlid": 5, 00:11:45.613 "listen_address": { 00:11:45.613 "adrfam": "IPv4", 00:11:45.613 "traddr": "10.0.0.2", 00:11:45.613 "trsvcid": "4420", 00:11:45.613 "trtype": "TCP" 00:11:45.613 }, 00:11:45.613 "peer_address": { 00:11:45.613 "adrfam": "IPv4", 00:11:45.613 "traddr": "10.0.0.1", 00:11:45.613 "trsvcid": "43664", 00:11:45.613 "trtype": "TCP" 00:11:45.613 }, 00:11:45.613 "qid": 0, 00:11:45.613 "state": "enabled", 00:11:45.613 "thread": "nvmf_tgt_poll_group_000" 00:11:45.613 } 00:11:45.613 ]' 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.613 11:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.613 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.613 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.613 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.871 11:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:46.806 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.064 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.323 00:11:47.323 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.323 11:30:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.323 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.581 { 00:11:47.581 "auth": { 00:11:47.581 "dhgroup": "null", 00:11:47.581 "digest": "sha256", 00:11:47.581 "state": "completed" 00:11:47.581 }, 00:11:47.581 "cntlid": 7, 00:11:47.581 "listen_address": { 00:11:47.581 "adrfam": "IPv4", 00:11:47.581 "traddr": "10.0.0.2", 00:11:47.581 "trsvcid": "4420", 00:11:47.581 "trtype": "TCP" 00:11:47.581 }, 00:11:47.581 "peer_address": { 00:11:47.581 "adrfam": "IPv4", 00:11:47.581 "traddr": "10.0.0.1", 00:11:47.581 "trsvcid": "43696", 00:11:47.581 "trtype": "TCP" 00:11:47.581 }, 00:11:47.581 "qid": 0, 00:11:47.581 "state": "enabled", 00:11:47.581 "thread": "nvmf_tgt_poll_group_000" 00:11:47.581 } 00:11:47.581 ]' 00:11:47.581 11:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.581 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.581 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.839 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:47.839 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.839 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.839 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.839 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.096 11:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:11:48.682 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- 
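keys[3] was generated without a companion ckey (ckeys[3] is empty), so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion drops the controller key for that pass: only the host proves its identity, and the matching nvme connect above carries no --dhchap-ctrl-secret. With that the inner key loop is done, and the outer dhgroup loop next repeats the whole sequence with ffdhe2048 in place of the null group, e.g.:

rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
    --dhchap-key key3                                                  # one-way auth: no --dhchap-ctrlr-key
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048   # next DH group iteration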
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:48.940 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.196 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.453 00:11:49.453 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.453 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.453 11:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.709 { 00:11:49.709 "auth": { 00:11:49.709 "dhgroup": "ffdhe2048", 00:11:49.709 "digest": "sha256", 00:11:49.709 "state": "completed" 00:11:49.709 }, 00:11:49.709 "cntlid": 9, 00:11:49.709 "listen_address": { 00:11:49.709 
"adrfam": "IPv4", 00:11:49.709 "traddr": "10.0.0.2", 00:11:49.709 "trsvcid": "4420", 00:11:49.709 "trtype": "TCP" 00:11:49.709 }, 00:11:49.709 "peer_address": { 00:11:49.709 "adrfam": "IPv4", 00:11:49.709 "traddr": "10.0.0.1", 00:11:49.709 "trsvcid": "43712", 00:11:49.709 "trtype": "TCP" 00:11:49.709 }, 00:11:49.709 "qid": 0, 00:11:49.709 "state": "enabled", 00:11:49.709 "thread": "nvmf_tgt_poll_group_000" 00:11:49.709 } 00:11:49.709 ]' 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.709 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.966 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.966 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.966 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.966 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.966 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.224 11:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.788 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.046 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.616 00:11:51.616 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.616 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.616 11:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.616 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.616 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.616 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.616 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.874 11:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.874 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.874 { 00:11:51.874 "auth": { 00:11:51.874 "dhgroup": "ffdhe2048", 00:11:51.874 "digest": "sha256", 00:11:51.874 "state": "completed" 00:11:51.874 }, 00:11:51.874 "cntlid": 11, 00:11:51.874 "listen_address": { 00:11:51.874 "adrfam": "IPv4", 00:11:51.874 "traddr": "10.0.0.2", 00:11:51.874 "trsvcid": "4420", 00:11:51.874 "trtype": "TCP" 00:11:51.874 }, 00:11:51.874 "peer_address": { 00:11:51.874 "adrfam": "IPv4", 00:11:51.874 "traddr": "10.0.0.1", 00:11:51.874 "trsvcid": "43734", 00:11:51.874 "trtype": "TCP" 00:11:51.874 }, 00:11:51.874 "qid": 0, 00:11:51.874 "state": "enabled", 00:11:51.874 "thread": "nvmf_tgt_poll_group_000" 00:11:51.874 } 00:11:51.874 ]' 00:11:51.874 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.874 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.875 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.875 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.875 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.875 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
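Annotation: every iteration in this trace repeats the same per-key RPC sequence; a condensed shell sketch of one pass is given here for reference. The socket path, address, port and NQNs are copied from the trace above; <host-nqn> is a placeholder for the nqn.2014-08.org.nvmexpress:uuid:891080d4-... host NQN used throughout, and it is assumed (as in SPDK's autotest helpers) that rpc_cmd wraps scripts/rpc.py against the target's default RPC socket while hostrpc adds -s /var/tmp/host.sock.

    # host side: restrict DH-HMAC-CHAP negotiation to the digest/DH group under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # target side: allow the host on the subsystem with the key pair under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host side: attach a controller through the listener, which forces in-band authentication
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host-nqn> \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # target side: the new qpair must report auth.state == "completed" for the pass to count
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'

The actual script captures the qpair JSON into a variable before running the jq checks; piping it directly, as above, is a simplification for readability.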
00:11:51.875 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.875 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.134 11:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.065 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.322 11:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.322 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.322 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.579 00:11:53.579 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.579 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.579 11:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.837 { 00:11:53.837 "auth": { 00:11:53.837 "dhgroup": "ffdhe2048", 00:11:53.837 "digest": "sha256", 00:11:53.837 "state": "completed" 00:11:53.837 }, 00:11:53.837 "cntlid": 13, 00:11:53.837 "listen_address": { 00:11:53.837 "adrfam": "IPv4", 00:11:53.837 "traddr": "10.0.0.2", 00:11:53.837 "trsvcid": "4420", 00:11:53.837 "trtype": "TCP" 00:11:53.837 }, 00:11:53.837 "peer_address": { 00:11:53.837 "adrfam": "IPv4", 00:11:53.837 "traddr": "10.0.0.1", 00:11:53.837 "trsvcid": "34586", 00:11:53.837 "trtype": "TCP" 00:11:53.837 }, 00:11:53.837 "qid": 0, 00:11:53.837 "state": "enabled", 00:11:53.837 "thread": "nvmf_tgt_poll_group_000" 00:11:53.837 } 00:11:53.837 ]' 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.837 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.095 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.095 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.095 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.095 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.095 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.353 11:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 
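Annotation: between the RPC-driven attach/detach passes, the trace also connects from the Linux initiator with nvme-cli, handing over the same key material as DHHC-1 secrets. A minimal sketch of that step, with the base64 key bodies replaced by placeholders (the -i 1, address and host ID values are the ones used throughout the trace):

    # in-band DH-HMAC-CHAP from the host: --dhchap-secret carries the host key,
    # --dhchap-ctrl-secret the controller key for bidirectional authentication
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
        --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 \
        --dhchap-secret 'DHHC-1:01:<base64 host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller key>:'
    # tear the session down again; the trace expects "disconnected 1 controller(s)"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The DHHC-1:NN:...: strings are the pre-generated secrets the test was started with; after a successful connect/disconnect the host entry is removed from the subsystem and the next key index is exercised.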
00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:54.918 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:55.176 11:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:55.742 00:11:55.742 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.742 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.742 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.999 { 00:11:55.999 "auth": { 00:11:55.999 "dhgroup": 
"ffdhe2048", 00:11:55.999 "digest": "sha256", 00:11:55.999 "state": "completed" 00:11:55.999 }, 00:11:55.999 "cntlid": 15, 00:11:55.999 "listen_address": { 00:11:55.999 "adrfam": "IPv4", 00:11:55.999 "traddr": "10.0.0.2", 00:11:55.999 "trsvcid": "4420", 00:11:55.999 "trtype": "TCP" 00:11:55.999 }, 00:11:55.999 "peer_address": { 00:11:55.999 "adrfam": "IPv4", 00:11:55.999 "traddr": "10.0.0.1", 00:11:55.999 "trsvcid": "34606", 00:11:55.999 "trtype": "TCP" 00:11:55.999 }, 00:11:55.999 "qid": 0, 00:11:55.999 "state": "enabled", 00:11:55.999 "thread": "nvmf_tgt_poll_group_000" 00:11:55.999 } 00:11:55.999 ]' 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.999 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.257 11:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.191 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.449 11:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.707 00:11:57.707 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.707 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.707 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.964 { 00:11:57.964 "auth": { 00:11:57.964 "dhgroup": "ffdhe3072", 00:11:57.964 "digest": "sha256", 00:11:57.964 "state": "completed" 00:11:57.964 }, 00:11:57.964 "cntlid": 17, 00:11:57.964 "listen_address": { 00:11:57.964 "adrfam": "IPv4", 00:11:57.964 "traddr": "10.0.0.2", 00:11:57.964 "trsvcid": "4420", 00:11:57.964 "trtype": "TCP" 00:11:57.964 }, 00:11:57.964 "peer_address": { 00:11:57.964 "adrfam": "IPv4", 00:11:57.964 "traddr": "10.0.0.1", 00:11:57.964 "trsvcid": "34640", 00:11:57.964 "trtype": "TCP" 00:11:57.964 }, 00:11:57.964 "qid": 0, 00:11:57.964 "state": "enabled", 00:11:57.964 "thread": "nvmf_tgt_poll_group_000" 00:11:57.964 } 00:11:57.964 ]' 00:11:57.964 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.223 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.481 11:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.414 11:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.673 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.673 
11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.242 00:12:00.242 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.242 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.242 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.500 { 00:12:00.500 "auth": { 00:12:00.500 "dhgroup": "ffdhe3072", 00:12:00.500 "digest": "sha256", 00:12:00.500 "state": "completed" 00:12:00.500 }, 00:12:00.500 "cntlid": 19, 00:12:00.500 "listen_address": { 00:12:00.500 "adrfam": "IPv4", 00:12:00.500 "traddr": "10.0.0.2", 00:12:00.500 "trsvcid": "4420", 00:12:00.500 "trtype": "TCP" 00:12:00.500 }, 00:12:00.500 "peer_address": { 00:12:00.500 "adrfam": "IPv4", 00:12:00.500 "traddr": "10.0.0.1", 00:12:00.500 "trsvcid": "34668", 00:12:00.500 "trtype": "TCP" 00:12:00.500 }, 00:12:00.500 "qid": 0, 00:12:00.500 "state": "enabled", 00:12:00.500 "thread": "nvmf_tgt_poll_group_000" 00:12:00.500 } 00:12:00.500 ]' 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.500 11:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.067 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:01.633 11:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
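Annotation: each pass validates the authenticated qpair the same way the lines above do for cntlid 19: the first element returned by nvmf_subsystem_get_qpairs must carry exactly the digest and DH group that were configured, and an auth state of "completed". Condensed, and assuming the JSON array has been captured in a $qpairs variable as the script does (the exact way the value is fed to jq differs slightly in the script):

    # all three checks are expected to succeed for the sha256/ffdhe3072 pass shown above
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

The cntlid reported in the qpair JSON (9, 11, 13, ... in this trace) simply advances with every new controller the test attaches; it is not part of the assertion.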
00:12:01.633 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:01.633 11:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.633 11:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.633 11:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.634 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.634 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.634 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.893 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.459 00:12:02.459 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.459 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.459 11:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.716 { 00:12:02.716 "auth": { 00:12:02.716 "dhgroup": "ffdhe3072", 00:12:02.716 "digest": "sha256", 00:12:02.716 "state": "completed" 00:12:02.716 }, 00:12:02.716 "cntlid": 21, 00:12:02.716 "listen_address": { 00:12:02.716 "adrfam": "IPv4", 00:12:02.716 "traddr": "10.0.0.2", 00:12:02.716 "trsvcid": "4420", 00:12:02.716 "trtype": "TCP" 00:12:02.716 }, 00:12:02.716 "peer_address": { 00:12:02.716 "adrfam": "IPv4", 00:12:02.716 "traddr": "10.0.0.1", 00:12:02.716 "trsvcid": "36548", 00:12:02.716 "trtype": "TCP" 00:12:02.716 }, 00:12:02.716 "qid": 0, 00:12:02.716 "state": "enabled", 00:12:02.716 "thread": "nvmf_tgt_poll_group_000" 00:12:02.716 } 00:12:02.716 ]' 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.716 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.975 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.975 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.975 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.233 11:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:03.800 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:04.367 11:30:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:04.367 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:04.625 00:12:04.625 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.625 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.625 11:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.883 { 00:12:04.883 "auth": { 00:12:04.883 "dhgroup": "ffdhe3072", 00:12:04.883 "digest": "sha256", 00:12:04.883 "state": "completed" 00:12:04.883 }, 00:12:04.883 "cntlid": 23, 00:12:04.883 "listen_address": { 00:12:04.883 "adrfam": "IPv4", 00:12:04.883 "traddr": "10.0.0.2", 00:12:04.883 "trsvcid": "4420", 00:12:04.883 "trtype": "TCP" 00:12:04.883 }, 00:12:04.883 "peer_address": { 00:12:04.883 "adrfam": "IPv4", 00:12:04.883 "traddr": "10.0.0.1", 00:12:04.883 "trsvcid": "36560", 00:12:04.883 "trtype": "TCP" 00:12:04.883 }, 00:12:04.883 "qid": 0, 00:12:04.883 "state": "enabled", 00:12:04.883 "thread": "nvmf_tgt_poll_group_000" 00:12:04.883 } 00:12:04.883 ]' 00:12:04.883 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.154 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.412 11:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.976 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.540 11:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.798 00:12:06.798 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.798 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.798 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.055 { 00:12:07.055 "auth": { 00:12:07.055 "dhgroup": "ffdhe4096", 00:12:07.055 "digest": "sha256", 00:12:07.055 "state": "completed" 00:12:07.055 }, 00:12:07.055 "cntlid": 25, 00:12:07.055 "listen_address": { 00:12:07.055 "adrfam": "IPv4", 00:12:07.055 "traddr": "10.0.0.2", 00:12:07.055 "trsvcid": "4420", 00:12:07.055 "trtype": "TCP" 00:12:07.055 }, 00:12:07.055 "peer_address": { 00:12:07.055 "adrfam": "IPv4", 00:12:07.055 "traddr": "10.0.0.1", 00:12:07.055 "trsvcid": "36598", 00:12:07.055 "trtype": "TCP" 00:12:07.055 }, 00:12:07.055 "qid": 0, 00:12:07.055 "state": "enabled", 00:12:07.055 "thread": "nvmf_tgt_poll_group_000" 00:12:07.055 } 00:12:07.055 ]' 00:12:07.055 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.313 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.628 11:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret 
DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:08.194 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.452 11:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.019 00:12:09.019 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.019 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.019 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
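Annotation: one detail worth calling out, visible in the key3 passes above (target/auth.sh@37 and the nvmf_subsystem_add_host calls that follow): a controller key is only supplied when a counterpart exists for that key index, so key3 is added and attached with --dhchap-key key3 alone, i.e. unidirectional authentication. A sketch of that conditional, with $keyid and $hostnqn standing in for the script's positional $3 and host NQN (illustrative names, not from the script):

    # expands to --dhchap-ctrlr-key "ckeyN" only when ckeys[N] is non-empty;
    # for key3 the array entry is empty, so no controller key is passed
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"

The corresponding nvme connect for key3 likewise passes only --dhchap-secret, with no --dhchap-ctrl-secret, as the trace shows.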
00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.277 { 00:12:09.277 "auth": { 00:12:09.277 "dhgroup": "ffdhe4096", 00:12:09.277 "digest": "sha256", 00:12:09.277 "state": "completed" 00:12:09.277 }, 00:12:09.277 "cntlid": 27, 00:12:09.277 "listen_address": { 00:12:09.277 "adrfam": "IPv4", 00:12:09.277 "traddr": "10.0.0.2", 00:12:09.277 "trsvcid": "4420", 00:12:09.277 "trtype": "TCP" 00:12:09.277 }, 00:12:09.277 "peer_address": { 00:12:09.277 "adrfam": "IPv4", 00:12:09.277 "traddr": "10.0.0.1", 00:12:09.277 "trsvcid": "36628", 00:12:09.277 "trtype": "TCP" 00:12:09.277 }, 00:12:09.277 "qid": 0, 00:12:09.277 "state": "enabled", 00:12:09.277 "thread": "nvmf_tgt_poll_group_000" 00:12:09.277 } 00:12:09.277 ]' 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.277 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.535 11:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:10.471 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.729 11:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.987 00:12:10.987 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.987 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.987 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.554 { 00:12:11.554 "auth": { 00:12:11.554 "dhgroup": "ffdhe4096", 00:12:11.554 "digest": "sha256", 00:12:11.554 "state": "completed" 00:12:11.554 }, 00:12:11.554 "cntlid": 29, 00:12:11.554 "listen_address": { 00:12:11.554 "adrfam": "IPv4", 00:12:11.554 "traddr": "10.0.0.2", 00:12:11.554 "trsvcid": "4420", 00:12:11.554 "trtype": "TCP" 00:12:11.554 }, 00:12:11.554 "peer_address": { 00:12:11.554 "adrfam": "IPv4", 00:12:11.554 "traddr": "10.0.0.1", 00:12:11.554 "trsvcid": "36664", 00:12:11.554 "trtype": "TCP" 00:12:11.554 }, 00:12:11.554 "qid": 0, 00:12:11.554 "state": "enabled", 00:12:11.554 "thread": 
"nvmf_tgt_poll_group_000" 00:12:11.554 } 00:12:11.554 ]' 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.554 11:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.811 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:12.741 11:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:12.999 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:13.257 00:12:13.257 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.257 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.257 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.515 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.515 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.515 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.515 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.515 11:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.515 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.515 { 00:12:13.515 "auth": { 00:12:13.515 "dhgroup": "ffdhe4096", 00:12:13.515 "digest": "sha256", 00:12:13.515 "state": "completed" 00:12:13.515 }, 00:12:13.515 "cntlid": 31, 00:12:13.515 "listen_address": { 00:12:13.515 "adrfam": "IPv4", 00:12:13.515 "traddr": "10.0.0.2", 00:12:13.515 "trsvcid": "4420", 00:12:13.515 "trtype": "TCP" 00:12:13.515 }, 00:12:13.516 "peer_address": { 00:12:13.516 "adrfam": "IPv4", 00:12:13.516 "traddr": "10.0.0.1", 00:12:13.516 "trsvcid": "55112", 00:12:13.516 "trtype": "TCP" 00:12:13.516 }, 00:12:13.516 "qid": 0, 00:12:13.516 "state": "enabled", 00:12:13.516 "thread": "nvmf_tgt_poll_group_000" 00:12:13.516 } 00:12:13.516 ]' 00:12:13.516 11:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.774 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.031 11:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 
891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.966 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.532 00:12:15.532 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.532 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.532 11:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.789 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.790 { 00:12:15.790 "auth": { 00:12:15.790 "dhgroup": "ffdhe6144", 00:12:15.790 "digest": "sha256", 00:12:15.790 "state": "completed" 00:12:15.790 }, 00:12:15.790 "cntlid": 33, 00:12:15.790 "listen_address": { 00:12:15.790 "adrfam": "IPv4", 00:12:15.790 "traddr": "10.0.0.2", 00:12:15.790 "trsvcid": "4420", 00:12:15.790 "trtype": "TCP" 00:12:15.790 }, 00:12:15.790 "peer_address": { 00:12:15.790 "adrfam": "IPv4", 00:12:15.790 "traddr": "10.0.0.1", 00:12:15.790 "trsvcid": "55150", 00:12:15.790 "trtype": "TCP" 00:12:15.790 }, 00:12:15.790 "qid": 0, 00:12:15.790 "state": "enabled", 00:12:15.790 "thread": "nvmf_tgt_poll_group_000" 00:12:15.790 } 00:12:15.790 ]' 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.790 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.047 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.047 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.047 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.047 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.047 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.305 11:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:16.870 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.127 11:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.692 00:12:17.692 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.692 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.693 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.951 { 00:12:17.951 "auth": { 00:12:17.951 "dhgroup": "ffdhe6144", 00:12:17.951 "digest": "sha256", 00:12:17.951 "state": "completed" 00:12:17.951 }, 00:12:17.951 "cntlid": 35, 00:12:17.951 "listen_address": { 00:12:17.951 "adrfam": "IPv4", 00:12:17.951 "traddr": "10.0.0.2", 00:12:17.951 "trsvcid": "4420", 00:12:17.951 "trtype": "TCP" 00:12:17.951 }, 00:12:17.951 
"peer_address": { 00:12:17.951 "adrfam": "IPv4", 00:12:17.951 "traddr": "10.0.0.1", 00:12:17.951 "trsvcid": "55162", 00:12:17.951 "trtype": "TCP" 00:12:17.951 }, 00:12:17.951 "qid": 0, 00:12:17.951 "state": "enabled", 00:12:17.951 "thread": "nvmf_tgt_poll_group_000" 00:12:17.951 } 00:12:17.951 ]' 00:12:17.951 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.209 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.468 11:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.035 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.293 11:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.860 00:12:19.860 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.860 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.860 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.118 { 00:12:20.118 "auth": { 00:12:20.118 "dhgroup": "ffdhe6144", 00:12:20.118 "digest": "sha256", 00:12:20.118 "state": "completed" 00:12:20.118 }, 00:12:20.118 "cntlid": 37, 00:12:20.118 "listen_address": { 00:12:20.118 "adrfam": "IPv4", 00:12:20.118 "traddr": "10.0.0.2", 00:12:20.118 "trsvcid": "4420", 00:12:20.118 "trtype": "TCP" 00:12:20.118 }, 00:12:20.118 "peer_address": { 00:12:20.118 "adrfam": "IPv4", 00:12:20.118 "traddr": "10.0.0.1", 00:12:20.118 "trsvcid": "55180", 00:12:20.118 "trtype": "TCP" 00:12:20.118 }, 00:12:20.118 "qid": 0, 00:12:20.118 "state": "enabled", 00:12:20.118 "thread": "nvmf_tgt_poll_group_000" 00:12:20.118 } 00:12:20.118 ]' 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.118 11:30:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.685 11:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:21.251 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:21.509 11:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.081 00:12:22.081 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:12:22.081 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.081 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.344 { 00:12:22.344 "auth": { 00:12:22.344 "dhgroup": "ffdhe6144", 00:12:22.344 "digest": "sha256", 00:12:22.344 "state": "completed" 00:12:22.344 }, 00:12:22.344 "cntlid": 39, 00:12:22.344 "listen_address": { 00:12:22.344 "adrfam": "IPv4", 00:12:22.344 "traddr": "10.0.0.2", 00:12:22.344 "trsvcid": "4420", 00:12:22.344 "trtype": "TCP" 00:12:22.344 }, 00:12:22.344 "peer_address": { 00:12:22.344 "adrfam": "IPv4", 00:12:22.344 "traddr": "10.0.0.1", 00:12:22.344 "trsvcid": "38616", 00:12:22.344 "trtype": "TCP" 00:12:22.344 }, 00:12:22.344 "qid": 0, 00:12:22.344 "state": "enabled", 00:12:22.344 "thread": "nvmf_tgt_poll_group_000" 00:12:22.344 } 00:12:22.344 ]' 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.344 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.602 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:22.602 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.602 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.602 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.602 11:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.861 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.428 11:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.687 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.623 00:12:24.623 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.623 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.623 11:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.623 { 00:12:24.623 "auth": { 00:12:24.623 "dhgroup": "ffdhe8192", 00:12:24.623 "digest": "sha256", 00:12:24.623 "state": "completed" 00:12:24.623 }, 00:12:24.623 "cntlid": 41, 
00:12:24.623 "listen_address": { 00:12:24.623 "adrfam": "IPv4", 00:12:24.623 "traddr": "10.0.0.2", 00:12:24.623 "trsvcid": "4420", 00:12:24.623 "trtype": "TCP" 00:12:24.623 }, 00:12:24.623 "peer_address": { 00:12:24.623 "adrfam": "IPv4", 00:12:24.623 "traddr": "10.0.0.1", 00:12:24.623 "trsvcid": "38646", 00:12:24.623 "trtype": "TCP" 00:12:24.623 }, 00:12:24.623 "qid": 0, 00:12:24.623 "state": "enabled", 00:12:24.623 "thread": "nvmf_tgt_poll_group_000" 00:12:24.623 } 00:12:24.623 ]' 00:12:24.623 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.882 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.140 11:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:26.076 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:26.333 
11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.333 11:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.899 00:12:26.899 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.899 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.899 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.156 { 00:12:27.156 "auth": { 00:12:27.156 "dhgroup": "ffdhe8192", 00:12:27.156 "digest": "sha256", 00:12:27.156 "state": "completed" 00:12:27.156 }, 00:12:27.156 "cntlid": 43, 00:12:27.156 "listen_address": { 00:12:27.156 "adrfam": "IPv4", 00:12:27.156 "traddr": "10.0.0.2", 00:12:27.156 "trsvcid": "4420", 00:12:27.156 "trtype": "TCP" 00:12:27.156 }, 00:12:27.156 "peer_address": { 00:12:27.156 "adrfam": "IPv4", 00:12:27.156 "traddr": "10.0.0.1", 00:12:27.156 "trsvcid": "38674", 00:12:27.156 "trtype": "TCP" 00:12:27.156 }, 00:12:27.156 "qid": 0, 00:12:27.156 "state": "enabled", 00:12:27.156 "thread": "nvmf_tgt_poll_group_000" 00:12:27.156 } 00:12:27.156 ]' 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.156 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.413 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:27.413 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.413 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.413 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.413 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.670 11:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:28.234 11:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.797 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.363 00:12:29.363 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.363 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.363 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.622 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.622 11:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.622 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.622 11:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.622 11:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.622 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.622 { 00:12:29.622 "auth": { 00:12:29.622 "dhgroup": "ffdhe8192", 00:12:29.622 "digest": "sha256", 00:12:29.622 "state": "completed" 00:12:29.622 }, 00:12:29.622 "cntlid": 45, 00:12:29.622 "listen_address": { 00:12:29.622 "adrfam": "IPv4", 00:12:29.622 "traddr": "10.0.0.2", 00:12:29.622 "trsvcid": "4420", 00:12:29.622 "trtype": "TCP" 00:12:29.622 }, 00:12:29.622 "peer_address": { 00:12:29.622 "adrfam": "IPv4", 00:12:29.622 "traddr": "10.0.0.1", 00:12:29.622 "trsvcid": "38698", 00:12:29.622 "trtype": "TCP" 00:12:29.622 }, 00:12:29.622 "qid": 0, 00:12:29.622 "state": "enabled", 00:12:29.622 "thread": "nvmf_tgt_poll_group_000" 00:12:29.622 } 00:12:29.622 ]' 00:12:29.622 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.622 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.622 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.880 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:29.880 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.880 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.880 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.880 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.139 11:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:30.705 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:30.964 11:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.531 00:12:31.789 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.789 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.789 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:32.047 { 00:12:32.047 "auth": { 00:12:32.047 "dhgroup": "ffdhe8192", 00:12:32.047 "digest": "sha256", 00:12:32.047 "state": "completed" 00:12:32.047 }, 00:12:32.047 "cntlid": 47, 00:12:32.047 "listen_address": { 00:12:32.047 "adrfam": "IPv4", 00:12:32.047 "traddr": "10.0.0.2", 00:12:32.047 "trsvcid": "4420", 00:12:32.047 "trtype": "TCP" 00:12:32.047 }, 00:12:32.047 "peer_address": { 00:12:32.047 "adrfam": "IPv4", 00:12:32.047 "traddr": "10.0.0.1", 00:12:32.047 "trsvcid": "38722", 00:12:32.047 "trtype": "TCP" 00:12:32.047 }, 00:12:32.047 "qid": 0, 00:12:32.047 "state": "enabled", 00:12:32.047 "thread": "nvmf_tgt_poll_group_000" 00:12:32.047 } 00:12:32.047 ]' 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.047 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.048 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.306 11:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.241 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.499 11:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.066 00:12:34.066 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.066 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.066 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.324 { 00:12:34.324 "auth": { 00:12:34.324 "dhgroup": "null", 00:12:34.324 "digest": "sha384", 00:12:34.324 "state": "completed" 00:12:34.324 }, 00:12:34.324 "cntlid": 49, 00:12:34.324 "listen_address": { 00:12:34.324 "adrfam": "IPv4", 00:12:34.324 "traddr": "10.0.0.2", 00:12:34.324 "trsvcid": "4420", 00:12:34.324 "trtype": "TCP" 00:12:34.324 }, 00:12:34.324 "peer_address": { 00:12:34.324 "adrfam": "IPv4", 00:12:34.324 "traddr": "10.0.0.1", 00:12:34.324 "trsvcid": "56852", 00:12:34.324 "trtype": "TCP" 00:12:34.324 }, 00:12:34.324 "qid": 0, 00:12:34.324 "state": "enabled", 00:12:34.324 "thread": "nvmf_tgt_poll_group_000" 00:12:34.324 } 00:12:34.324 ]' 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.324 11:31:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.324 11:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.583 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.534 11:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.793 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.050 00:12:36.050 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.050 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.050 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.308 { 00:12:36.308 "auth": { 00:12:36.308 "dhgroup": "null", 00:12:36.308 "digest": "sha384", 00:12:36.308 "state": "completed" 00:12:36.308 }, 00:12:36.308 "cntlid": 51, 00:12:36.308 "listen_address": { 00:12:36.308 "adrfam": "IPv4", 00:12:36.308 "traddr": "10.0.0.2", 00:12:36.308 "trsvcid": "4420", 00:12:36.308 "trtype": "TCP" 00:12:36.308 }, 00:12:36.308 "peer_address": { 00:12:36.308 "adrfam": "IPv4", 00:12:36.308 "traddr": "10.0.0.1", 00:12:36.308 "trsvcid": "56878", 00:12:36.308 "trtype": "TCP" 00:12:36.308 }, 00:12:36.308 "qid": 0, 00:12:36.308 "state": "enabled", 00:12:36.308 "thread": "nvmf_tgt_poll_group_000" 00:12:36.308 } 00:12:36.308 ]' 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.308 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.566 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:36.566 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.566 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.566 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.566 11:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.824 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:37.399 11:31:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.399 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:37.399 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.399 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.657 11:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.657 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.657 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.657 11:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.916 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.174 00:12:38.174 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.174 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.174 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.433 { 00:12:38.433 "auth": { 00:12:38.433 "dhgroup": "null", 00:12:38.433 "digest": "sha384", 00:12:38.433 "state": "completed" 00:12:38.433 }, 00:12:38.433 "cntlid": 53, 00:12:38.433 "listen_address": { 00:12:38.433 "adrfam": "IPv4", 00:12:38.433 "traddr": "10.0.0.2", 00:12:38.433 "trsvcid": "4420", 00:12:38.433 "trtype": "TCP" 00:12:38.433 }, 00:12:38.433 "peer_address": { 00:12:38.433 "adrfam": "IPv4", 00:12:38.433 "traddr": "10.0.0.1", 00:12:38.433 "trsvcid": "56898", 00:12:38.433 "trtype": "TCP" 00:12:38.433 }, 00:12:38.433 "qid": 0, 00:12:38.433 "state": "enabled", 00:12:38.433 "thread": "nvmf_tgt_poll_group_000" 00:12:38.433 } 00:12:38.433 ]' 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.433 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.690 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:38.690 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.690 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.690 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.690 11:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.949 11:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:39.884 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.142 11:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.143 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.143 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:40.400 00:12:40.400 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.400 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.400 11:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.658 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.658 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.658 11:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.658 11:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.916 11:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.916 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.916 { 00:12:40.916 "auth": { 00:12:40.916 "dhgroup": "null", 00:12:40.916 "digest": "sha384", 00:12:40.916 "state": "completed" 00:12:40.916 }, 00:12:40.917 "cntlid": 55, 00:12:40.917 "listen_address": { 00:12:40.917 "adrfam": "IPv4", 00:12:40.917 "traddr": "10.0.0.2", 00:12:40.917 "trsvcid": "4420", 00:12:40.917 "trtype": "TCP" 00:12:40.917 }, 00:12:40.917 "peer_address": { 00:12:40.917 "adrfam": "IPv4", 00:12:40.917 "traddr": "10.0.0.1", 00:12:40.917 "trsvcid": "56924", 00:12:40.917 "trtype": "TCP" 00:12:40.917 }, 00:12:40.917 "qid": 0, 00:12:40.917 "state": "enabled", 00:12:40.917 "thread": "nvmf_tgt_poll_group_000" 00:12:40.917 } 00:12:40.917 ]' 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.917 11:31:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.917 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.175 11:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.110 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.369 11:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.369 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.369 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.647 00:12:42.647 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.647 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.647 11:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.937 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.937 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.937 11:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.937 11:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.937 11:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.937 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.937 { 00:12:42.937 "auth": { 00:12:42.937 "dhgroup": "ffdhe2048", 00:12:42.937 "digest": "sha384", 00:12:42.937 "state": "completed" 00:12:42.937 }, 00:12:42.937 "cntlid": 57, 00:12:42.937 "listen_address": { 00:12:42.937 "adrfam": "IPv4", 00:12:42.937 "traddr": "10.0.0.2", 00:12:42.937 "trsvcid": "4420", 00:12:42.937 "trtype": "TCP" 00:12:42.937 }, 00:12:42.937 "peer_address": { 00:12:42.937 "adrfam": "IPv4", 00:12:42.938 "traddr": "10.0.0.1", 00:12:42.938 "trsvcid": "37416", 00:12:42.938 "trtype": "TCP" 00:12:42.938 }, 00:12:42.938 "qid": 0, 00:12:42.938 "state": "enabled", 00:12:42.938 "thread": "nvmf_tgt_poll_group_000" 00:12:42.938 } 00:12:42.938 ]' 00:12:42.938 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.938 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.938 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.938 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.938 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.196 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.196 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.196 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.454 11:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret 
DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:44.021 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.279 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.537 11:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.537 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.537 11:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.795 00:12:44.795 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.795 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.795 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
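[editor's note] For readability, here is a minimal bash sketch of the per-key sequence the log above keeps repeating: restrict the host initiator to one digest/dhgroup combination, authorize the host NQN on the subsystem with DH-CHAP keys, attach the controller, then read back the negotiated auth parameters from the qpair. The paths, NQNs and the key1/ckey1 names are copied from the log itself; it assumes the SPDK target and the host application behind /var/tmp/host.sock are already running and that those keys were registered earlier in the run (not shown in this excerpt).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421

    # Host side: only offer sha384 with the "null" (no-DH) group for this pass
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

    # Target side: allow the host with a DH-CHAP key and (optionally) a controller key
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach, then verify what the qpair actually negotiated
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

The [[ sha384 == \s\h\a\3\8\4 ]]-style comparisons in the log are the test script checking those jq outputs against the expected digest, dhgroup and "completed" state for each iteration.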
00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.053 { 00:12:45.053 "auth": { 00:12:45.053 "dhgroup": "ffdhe2048", 00:12:45.053 "digest": "sha384", 00:12:45.053 "state": "completed" 00:12:45.053 }, 00:12:45.053 "cntlid": 59, 00:12:45.053 "listen_address": { 00:12:45.053 "adrfam": "IPv4", 00:12:45.053 "traddr": "10.0.0.2", 00:12:45.053 "trsvcid": "4420", 00:12:45.053 "trtype": "TCP" 00:12:45.053 }, 00:12:45.053 "peer_address": { 00:12:45.053 "adrfam": "IPv4", 00:12:45.053 "traddr": "10.0.0.1", 00:12:45.053 "trsvcid": "37452", 00:12:45.053 "trtype": "TCP" 00:12:45.053 }, 00:12:45.053 "qid": 0, 00:12:45.053 "state": "enabled", 00:12:45.053 "thread": "nvmf_tgt_poll_group_000" 00:12:45.053 } 00:12:45.053 ]' 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.053 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.622 11:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.189 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.447 11:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.448 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.448 11:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.014 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.014 11:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.015 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.015 { 00:12:47.015 "auth": { 00:12:47.015 "dhgroup": "ffdhe2048", 00:12:47.015 "digest": "sha384", 00:12:47.015 "state": "completed" 00:12:47.015 }, 00:12:47.015 "cntlid": 61, 00:12:47.015 "listen_address": { 00:12:47.015 "adrfam": "IPv4", 00:12:47.015 "traddr": "10.0.0.2", 00:12:47.015 "trsvcid": "4420", 00:12:47.015 "trtype": "TCP" 00:12:47.015 }, 00:12:47.015 "peer_address": { 00:12:47.015 "adrfam": "IPv4", 00:12:47.015 "traddr": "10.0.0.1", 00:12:47.015 "trsvcid": "37484", 00:12:47.015 "trtype": "TCP" 00:12:47.015 }, 00:12:47.015 "qid": 0, 00:12:47.015 "state": "enabled", 00:12:47.015 "thread": 
"nvmf_tgt_poll_group_000" 00:12:47.015 } 00:12:47.015 ]' 00:12:47.015 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.274 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.532 11:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.466 11:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.467 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:48.467 11:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.049 00:12:49.049 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.049 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.049 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.310 { 00:12:49.310 "auth": { 00:12:49.310 "dhgroup": "ffdhe2048", 00:12:49.310 "digest": "sha384", 00:12:49.310 "state": "completed" 00:12:49.310 }, 00:12:49.310 "cntlid": 63, 00:12:49.310 "listen_address": { 00:12:49.310 "adrfam": "IPv4", 00:12:49.310 "traddr": "10.0.0.2", 00:12:49.310 "trsvcid": "4420", 00:12:49.310 "trtype": "TCP" 00:12:49.310 }, 00:12:49.310 "peer_address": { 00:12:49.310 "adrfam": "IPv4", 00:12:49.310 "traddr": "10.0.0.1", 00:12:49.310 "trsvcid": "37522", 00:12:49.310 "trtype": "TCP" 00:12:49.310 }, 00:12:49.310 "qid": 0, 00:12:49.310 "state": "enabled", 00:12:49.310 "thread": "nvmf_tgt_poll_group_000" 00:12:49.310 } 00:12:49.310 ]' 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:49.310 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.568 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.568 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.568 11:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.828 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 
891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.396 11:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.655 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.915 00:12:50.915 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.915 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.915 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.482 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.482 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.482 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.482 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.483 { 00:12:51.483 "auth": { 00:12:51.483 "dhgroup": "ffdhe3072", 00:12:51.483 "digest": "sha384", 00:12:51.483 "state": "completed" 00:12:51.483 }, 00:12:51.483 "cntlid": 65, 00:12:51.483 "listen_address": { 00:12:51.483 "adrfam": "IPv4", 00:12:51.483 "traddr": "10.0.0.2", 00:12:51.483 "trsvcid": "4420", 00:12:51.483 "trtype": "TCP" 00:12:51.483 }, 00:12:51.483 "peer_address": { 00:12:51.483 "adrfam": "IPv4", 00:12:51.483 "traddr": "10.0.0.1", 00:12:51.483 "trsvcid": "37556", 00:12:51.483 "trtype": "TCP" 00:12:51.483 }, 00:12:51.483 "qid": 0, 00:12:51.483 "state": "enabled", 00:12:51.483 "thread": "nvmf_tgt_poll_group_000" 00:12:51.483 } 00:12:51.483 ]' 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.483 11:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.741 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.677 11:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.677 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.243 00:12:53.243 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.243 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.243 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.502 { 00:12:53.502 "auth": { 00:12:53.502 "dhgroup": "ffdhe3072", 00:12:53.502 "digest": "sha384", 00:12:53.502 "state": "completed" 00:12:53.502 }, 00:12:53.502 "cntlid": 67, 00:12:53.502 "listen_address": { 00:12:53.502 "adrfam": "IPv4", 00:12:53.502 "traddr": "10.0.0.2", 00:12:53.502 "trsvcid": "4420", 00:12:53.502 "trtype": "TCP" 00:12:53.502 }, 00:12:53.502 
"peer_address": { 00:12:53.502 "adrfam": "IPv4", 00:12:53.502 "traddr": "10.0.0.1", 00:12:53.502 "trsvcid": "37294", 00:12:53.502 "trtype": "TCP" 00:12:53.502 }, 00:12:53.502 "qid": 0, 00:12:53.502 "state": "enabled", 00:12:53.502 "thread": "nvmf_tgt_poll_group_000" 00:12:53.502 } 00:12:53.502 ]' 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.502 11:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.066 11:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.629 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.904 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.489 00:12:55.489 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.489 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.489 11:31:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.747 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.747 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.747 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.747 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.747 11:31:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.747 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.748 { 00:12:55.748 "auth": { 00:12:55.748 "dhgroup": "ffdhe3072", 00:12:55.748 "digest": "sha384", 00:12:55.748 "state": "completed" 00:12:55.748 }, 00:12:55.748 "cntlid": 69, 00:12:55.748 "listen_address": { 00:12:55.748 "adrfam": "IPv4", 00:12:55.748 "traddr": "10.0.0.2", 00:12:55.748 "trsvcid": "4420", 00:12:55.748 "trtype": "TCP" 00:12:55.748 }, 00:12:55.748 "peer_address": { 00:12:55.748 "adrfam": "IPv4", 00:12:55.748 "traddr": "10.0.0.1", 00:12:55.748 "trsvcid": "37318", 00:12:55.748 "trtype": "TCP" 00:12:55.748 }, 00:12:55.748 "qid": 0, 00:12:55.748 "state": "enabled", 00:12:55.748 "thread": "nvmf_tgt_poll_group_000" 00:12:55.748 } 00:12:55.748 ]' 00:12:55.748 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.748 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.748 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.748 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.748 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.005 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.005 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.005 11:31:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.262 11:31:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:56.827 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.393 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.651 00:12:57.651 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:57.651 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.651 11:31:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.923 { 00:12:57.923 "auth": { 00:12:57.923 "dhgroup": "ffdhe3072", 00:12:57.923 "digest": "sha384", 00:12:57.923 "state": "completed" 00:12:57.923 }, 00:12:57.923 "cntlid": 71, 00:12:57.923 "listen_address": { 00:12:57.923 "adrfam": "IPv4", 00:12:57.923 "traddr": "10.0.0.2", 00:12:57.923 "trsvcid": "4420", 00:12:57.923 "trtype": "TCP" 00:12:57.923 }, 00:12:57.923 "peer_address": { 00:12:57.923 "adrfam": "IPv4", 00:12:57.923 "traddr": "10.0.0.1", 00:12:57.923 "trsvcid": "37350", 00:12:57.923 "trtype": "TCP" 00:12:57.923 }, 00:12:57.923 "qid": 0, 00:12:57.923 "state": "enabled", 00:12:57.923 "thread": "nvmf_tgt_poll_group_000" 00:12:57.923 } 00:12:57.923 ]' 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.923 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.182 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.182 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.182 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.439 11:31:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
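The run above is one full pass of the test's connect/authenticate cycle for sha384 with ffdhe3072: the target is told which key the host may present (nvmf_subsystem_add_host), the SPDK host attaches a controller with the matching key, the controller and qpair state are read back, and everything is torn down again before the next key is tried. A minimal sketch of that cycle, reconstructed from the commands in this trace rather than quoted from target/auth.sh itself (the NQNs, the 10.0.0.2:4420 listener, the /var/tmp/host.sock host socket and the key2/ckey2 names are the ones this job uses, mirroring the key2 pass near the top of this excerpt; the keys themselves were registered earlier in the run, outside this excerpt, and the target-side calls are assumed to reach the target app's default RPC socket, which the script's rpc_cmd wrapper hides):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421

  # target side: allow this host NQN to authenticate with key2 (ckey2 for bidirectional auth)
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # host side: attach a controller over TCP using the same key pair
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # verify the attach and the negotiated auth parameters, then tear down
  "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"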
00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.006 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.264 11:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.830 00:12:59.830 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.830 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.830 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.088 { 00:13:00.088 "auth": { 00:13:00.088 "dhgroup": "ffdhe4096", 00:13:00.088 "digest": "sha384", 00:13:00.088 "state": "completed" 00:13:00.088 }, 00:13:00.088 "cntlid": 73, 
00:13:00.088 "listen_address": { 00:13:00.088 "adrfam": "IPv4", 00:13:00.088 "traddr": "10.0.0.2", 00:13:00.088 "trsvcid": "4420", 00:13:00.088 "trtype": "TCP" 00:13:00.088 }, 00:13:00.088 "peer_address": { 00:13:00.088 "adrfam": "IPv4", 00:13:00.088 "traddr": "10.0.0.1", 00:13:00.088 "trsvcid": "37378", 00:13:00.088 "trtype": "TCP" 00:13:00.088 }, 00:13:00.088 "qid": 0, 00:13:00.088 "state": "enabled", 00:13:00.088 "thread": "nvmf_tgt_poll_group_000" 00:13:00.088 } 00:13:00.088 ]' 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.088 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.346 11:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:01.280 
11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.280 11:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.869 00:13:01.869 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.869 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.869 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.126 { 00:13:02.126 "auth": { 00:13:02.126 "dhgroup": "ffdhe4096", 00:13:02.126 "digest": "sha384", 00:13:02.126 "state": "completed" 00:13:02.126 }, 00:13:02.126 "cntlid": 75, 00:13:02.126 "listen_address": { 00:13:02.126 "adrfam": "IPv4", 00:13:02.126 "traddr": "10.0.0.2", 00:13:02.126 "trsvcid": "4420", 00:13:02.126 "trtype": "TCP" 00:13:02.126 }, 00:13:02.126 "peer_address": { 00:13:02.126 "adrfam": "IPv4", 00:13:02.126 "traddr": "10.0.0.1", 00:13:02.126 "trsvcid": "55234", 00:13:02.126 "trtype": "TCP" 00:13:02.126 }, 00:13:02.126 "qid": 0, 00:13:02.126 "state": "enabled", 00:13:02.126 "thread": "nvmf_tgt_poll_group_000" 00:13:02.126 } 00:13:02.126 ]' 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.126 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.383 11:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.319 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.577 11:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.835 00:13:03.835 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.835 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.835 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:04.094 { 00:13:04.094 "auth": { 00:13:04.094 "dhgroup": "ffdhe4096", 00:13:04.094 "digest": "sha384", 00:13:04.094 "state": "completed" 00:13:04.094 }, 00:13:04.094 "cntlid": 77, 00:13:04.094 "listen_address": { 00:13:04.094 "adrfam": "IPv4", 00:13:04.094 "traddr": "10.0.0.2", 00:13:04.094 "trsvcid": "4420", 00:13:04.094 "trtype": "TCP" 00:13:04.094 }, 00:13:04.094 "peer_address": { 00:13:04.094 "adrfam": "IPv4", 00:13:04.094 "traddr": "10.0.0.1", 00:13:04.094 "trsvcid": "55248", 00:13:04.094 "trtype": "TCP" 00:13:04.094 }, 00:13:04.094 "qid": 0, 00:13:04.094 "state": "enabled", 00:13:04.094 "thread": "nvmf_tgt_poll_group_000" 00:13:04.094 } 00:13:04.094 ]' 00:13:04.094 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.352 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.610 11:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:05.176 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:05.434 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.435 11:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:06.000 00:13:06.000 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.000 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.000 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:06.257 { 00:13:06.257 "auth": { 00:13:06.257 "dhgroup": "ffdhe4096", 00:13:06.257 "digest": "sha384", 00:13:06.257 "state": "completed" 00:13:06.257 }, 00:13:06.257 "cntlid": 79, 00:13:06.257 "listen_address": { 00:13:06.257 "adrfam": "IPv4", 00:13:06.257 "traddr": "10.0.0.2", 00:13:06.257 "trsvcid": "4420", 00:13:06.257 "trtype": "TCP" 00:13:06.257 }, 00:13:06.257 "peer_address": { 00:13:06.257 "adrfam": "IPv4", 00:13:06.257 "traddr": "10.0.0.1", 00:13:06.257 "trsvcid": "55276", 00:13:06.257 "trtype": "TCP" 00:13:06.257 }, 00:13:06.257 "qid": 0, 00:13:06.257 "state": "enabled", 00:13:06.257 "thread": "nvmf_tgt_poll_group_000" 00:13:06.257 } 00:13:06.257 ]' 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:06.257 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.516 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.516 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.516 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.775 11:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.341 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
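Each iteration first pins the host down to a single digest and DH group (the bdev_nvme_set_options call at target/auth.sh@94 above), so the values later read back from the target via nvmf_subsystem_get_qpairs show what was actually negotiated, not merely what was offered. A minimal sketch of that restriction plus the read-back checks, mirrored from the [[ ... ]] tests at target/auth.sh@46-48 and using this iteration's sha384/ffdhe6144 values (the target is again assumed to answer on its default RPC socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # host side: offer only sha384 with ffdhe6144 for DH-HMAC-CHAP
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # after the controller is attached: confirm qpair 0 completed auth with exactly those parameters
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]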
00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.599 11:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.599 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.599 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.599 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.165 00:13:08.165 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.165 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.165 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.424 { 00:13:08.424 "auth": { 00:13:08.424 "dhgroup": "ffdhe6144", 00:13:08.424 "digest": "sha384", 00:13:08.424 "state": "completed" 00:13:08.424 }, 00:13:08.424 "cntlid": 81, 00:13:08.424 "listen_address": { 00:13:08.424 "adrfam": "IPv4", 00:13:08.424 "traddr": "10.0.0.2", 00:13:08.424 "trsvcid": "4420", 00:13:08.424 "trtype": "TCP" 00:13:08.424 }, 00:13:08.424 "peer_address": { 00:13:08.424 "adrfam": "IPv4", 00:13:08.424 "traddr": "10.0.0.1", 00:13:08.424 "trsvcid": "55290", 00:13:08.424 "trtype": "TCP" 00:13:08.424 }, 00:13:08.424 "qid": 0, 00:13:08.424 "state": "enabled", 00:13:08.424 "thread": "nvmf_tgt_poll_group_000" 00:13:08.424 } 00:13:08.424 ]' 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.424 11:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.015 11:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:09.583 11:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.583 11:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:09.583 11:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.583 11:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.583 11:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.583 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.583 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.583 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.841 11:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.098 11:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.099 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.099 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.356 00:13:10.615 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.615 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.615 11:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.888 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.888 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.888 11:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.888 11:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.888 11:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.888 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.888 { 00:13:10.888 "auth": { 00:13:10.888 "dhgroup": "ffdhe6144", 00:13:10.888 "digest": "sha384", 00:13:10.888 "state": "completed" 00:13:10.888 }, 00:13:10.888 "cntlid": 83, 00:13:10.888 "listen_address": { 00:13:10.888 "adrfam": "IPv4", 00:13:10.888 "traddr": "10.0.0.2", 00:13:10.888 "trsvcid": "4420", 00:13:10.888 "trtype": "TCP" 00:13:10.888 }, 00:13:10.888 "peer_address": { 00:13:10.888 "adrfam": "IPv4", 00:13:10.888 "traddr": "10.0.0.1", 00:13:10.888 "trsvcid": "55324", 00:13:10.888 "trtype": "TCP" 00:13:10.888 }, 00:13:10.888 "qid": 0, 00:13:10.888 "state": "enabled", 00:13:10.888 "thread": "nvmf_tgt_poll_group_000" 00:13:10.888 } 00:13:10.889 ]' 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.889 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.186 11:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:12.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.121 11:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.687 00:13:12.687 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.687 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.687 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.945 { 00:13:12.945 "auth": { 00:13:12.945 "dhgroup": "ffdhe6144", 00:13:12.945 "digest": "sha384", 00:13:12.945 "state": "completed" 00:13:12.945 }, 00:13:12.945 "cntlid": 85, 00:13:12.945 "listen_address": { 00:13:12.945 "adrfam": "IPv4", 00:13:12.945 "traddr": "10.0.0.2", 00:13:12.945 "trsvcid": "4420", 00:13:12.945 "trtype": "TCP" 00:13:12.945 }, 00:13:12.945 "peer_address": { 00:13:12.945 "adrfam": "IPv4", 00:13:12.945 "traddr": "10.0.0.1", 00:13:12.945 "trsvcid": "52268", 00:13:12.945 "trtype": "TCP" 00:13:12.945 }, 00:13:12.945 "qid": 0, 00:13:12.945 "state": "enabled", 00:13:12.945 "thread": "nvmf_tgt_poll_group_000" 00:13:12.945 } 00:13:12.945 ]' 00:13:12.945 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.202 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.458 11:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:14.391 11:31:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.391 11:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:14.954 00:13:14.954 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.954 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.954 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.212 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.212 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.212 11:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.212 11:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.212 11:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.212 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.212 { 00:13:15.212 "auth": { 00:13:15.212 "dhgroup": "ffdhe6144", 00:13:15.212 "digest": "sha384", 00:13:15.212 "state": "completed" 00:13:15.212 }, 00:13:15.212 "cntlid": 87, 00:13:15.212 "listen_address": { 00:13:15.212 "adrfam": "IPv4", 00:13:15.212 "traddr": "10.0.0.2", 00:13:15.212 "trsvcid": "4420", 00:13:15.212 "trtype": "TCP" 00:13:15.212 }, 00:13:15.213 "peer_address": { 00:13:15.213 "adrfam": "IPv4", 00:13:15.213 "traddr": "10.0.0.1", 00:13:15.213 "trsvcid": "52296", 00:13:15.213 "trtype": "TCP" 00:13:15.213 }, 00:13:15.213 "qid": 0, 00:13:15.213 "state": "enabled", 00:13:15.213 "thread": "nvmf_tgt_poll_group_000" 00:13:15.213 } 00:13:15.213 ]' 00:13:15.213 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.213 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:15.213 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.470 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:15.470 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.470 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.470 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.470 11:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.728 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.659 11:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.659 11:31:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.659 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.590 00:13:17.590 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.590 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.590 11:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.590 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.590 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.590 11:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.590 11:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.846 { 00:13:17.846 "auth": { 00:13:17.846 "dhgroup": "ffdhe8192", 00:13:17.846 "digest": "sha384", 00:13:17.846 "state": "completed" 00:13:17.846 }, 00:13:17.846 "cntlid": 89, 00:13:17.846 "listen_address": { 00:13:17.846 "adrfam": "IPv4", 00:13:17.846 "traddr": "10.0.0.2", 00:13:17.846 "trsvcid": "4420", 00:13:17.846 "trtype": "TCP" 00:13:17.846 }, 00:13:17.846 "peer_address": { 00:13:17.846 "adrfam": "IPv4", 00:13:17.846 "traddr": "10.0.0.1", 00:13:17.846 "trsvcid": "52330", 00:13:17.846 "trtype": "TCP" 00:13:17.846 }, 00:13:17.846 "qid": 0, 00:13:17.846 "state": "enabled", 00:13:17.846 "thread": "nvmf_tgt_poll_group_000" 00:13:17.846 } 00:13:17.846 ]' 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.846 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.103 11:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret 
DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.036 11:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.620 00:13:19.620 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.620 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.620 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
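Besides the SPDK initiator, every pass also exercises the in-kernel host: after the host entry is restored on the subsystem, nvme-cli connects with the literal DHHC-1 secrets (the base64-wrapped key material printed in the trace) rather than named keys, then disconnects. The shape of that call as used in this run, with hypothetical <host secret> and <ctrl secret> placeholders standing in for the generated values shown above:

  # kernel host path: hand the DH-HMAC-CHAP secrets directly to nvme-cli
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 \
      --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 \
      --dhchap-secret 'DHHC-1:00:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'

  # expect "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" on teardown
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0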
00:13:19.878 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.878 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.878 11:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.878 11:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.136 { 00:13:20.136 "auth": { 00:13:20.136 "dhgroup": "ffdhe8192", 00:13:20.136 "digest": "sha384", 00:13:20.136 "state": "completed" 00:13:20.136 }, 00:13:20.136 "cntlid": 91, 00:13:20.136 "listen_address": { 00:13:20.136 "adrfam": "IPv4", 00:13:20.136 "traddr": "10.0.0.2", 00:13:20.136 "trsvcid": "4420", 00:13:20.136 "trtype": "TCP" 00:13:20.136 }, 00:13:20.136 "peer_address": { 00:13:20.136 "adrfam": "IPv4", 00:13:20.136 "traddr": "10.0.0.1", 00:13:20.136 "trsvcid": "52350", 00:13:20.136 "trtype": "TCP" 00:13:20.136 }, 00:13:20.136 "qid": 0, 00:13:20.136 "state": "enabled", 00:13:20.136 "thread": "nvmf_tgt_poll_group_000" 00:13:20.136 } 00:13:20.136 ]' 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.136 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.394 11:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:13:21.328 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.586 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.587 11:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.153 00:13:22.153 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.153 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.153 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.412 { 00:13:22.412 "auth": { 00:13:22.412 "dhgroup": "ffdhe8192", 00:13:22.412 "digest": "sha384", 00:13:22.412 "state": "completed" 00:13:22.412 }, 00:13:22.412 "cntlid": 93, 00:13:22.412 "listen_address": { 00:13:22.412 "adrfam": "IPv4", 00:13:22.412 "traddr": "10.0.0.2", 00:13:22.412 "trsvcid": "4420", 00:13:22.412 "trtype": "TCP" 00:13:22.412 }, 00:13:22.412 "peer_address": { 00:13:22.412 "adrfam": "IPv4", 00:13:22.412 "traddr": "10.0.0.1", 00:13:22.412 "trsvcid": "50992", 00:13:22.412 
"trtype": "TCP" 00:13:22.412 }, 00:13:22.412 "qid": 0, 00:13:22.412 "state": "enabled", 00:13:22.412 "thread": "nvmf_tgt_poll_group_000" 00:13:22.412 } 00:13:22.412 ]' 00:13:22.412 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.673 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.673 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.673 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.673 11:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.673 11:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.673 11:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.673 11:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.930 11:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:23.862 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:24.121 11:32:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.121 11:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:24.717 00:13:24.717 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.717 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.717 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.975 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.975 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.975 11:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.975 11:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.975 11:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.976 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.976 { 00:13:24.976 "auth": { 00:13:24.976 "dhgroup": "ffdhe8192", 00:13:24.976 "digest": "sha384", 00:13:24.976 "state": "completed" 00:13:24.976 }, 00:13:24.976 "cntlid": 95, 00:13:24.976 "listen_address": { 00:13:24.976 "adrfam": "IPv4", 00:13:24.976 "traddr": "10.0.0.2", 00:13:24.976 "trsvcid": "4420", 00:13:24.976 "trtype": "TCP" 00:13:24.976 }, 00:13:24.976 "peer_address": { 00:13:24.976 "adrfam": "IPv4", 00:13:24.976 "traddr": "10.0.0.1", 00:13:24.976 "trsvcid": "51010", 00:13:24.976 "trtype": "TCP" 00:13:24.976 }, 00:13:24.976 "qid": 0, 00:13:24.976 "state": "enabled", 00:13:24.976 "thread": "nvmf_tgt_poll_group_000" 00:13:24.976 } 00:13:24.976 ]' 00:13:24.976 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.976 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.976 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.233 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:25.233 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.233 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.233 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.233 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.491 11:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:13:26.056 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.056 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:26.056 11:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.056 11:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.313 11:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.313 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:26.313 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.313 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.313 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.313 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.571 11:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.828 00:13:26.828 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:26.828 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.828 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.086 { 00:13:27.086 "auth": { 00:13:27.086 "dhgroup": "null", 00:13:27.086 "digest": "sha512", 00:13:27.086 "state": "completed" 00:13:27.086 }, 00:13:27.086 "cntlid": 97, 00:13:27.086 "listen_address": { 00:13:27.086 "adrfam": "IPv4", 00:13:27.086 "traddr": "10.0.0.2", 00:13:27.086 "trsvcid": "4420", 00:13:27.086 "trtype": "TCP" 00:13:27.086 }, 00:13:27.086 "peer_address": { 00:13:27.086 "adrfam": "IPv4", 00:13:27.086 "traddr": "10.0.0.1", 00:13:27.086 "trsvcid": "51046", 00:13:27.086 "trtype": "TCP" 00:13:27.086 }, 00:13:27.086 "qid": 0, 00:13:27.086 "state": "enabled", 00:13:27.086 "thread": "nvmf_tgt_poll_group_000" 00:13:27.086 } 00:13:27.086 ]' 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.086 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.343 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.343 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:27.343 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.343 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.344 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.344 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.602 11:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.534 
11:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.534 11:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.791 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.047 00:13:29.047 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.047 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.047 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.304 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.304 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.305 { 00:13:29.305 "auth": { 00:13:29.305 "dhgroup": "null", 00:13:29.305 "digest": "sha512", 00:13:29.305 "state": "completed" 00:13:29.305 }, 00:13:29.305 "cntlid": 99, 00:13:29.305 "listen_address": { 
00:13:29.305 "adrfam": "IPv4", 00:13:29.305 "traddr": "10.0.0.2", 00:13:29.305 "trsvcid": "4420", 00:13:29.305 "trtype": "TCP" 00:13:29.305 }, 00:13:29.305 "peer_address": { 00:13:29.305 "adrfam": "IPv4", 00:13:29.305 "traddr": "10.0.0.1", 00:13:29.305 "trsvcid": "51064", 00:13:29.305 "trtype": "TCP" 00:13:29.305 }, 00:13:29.305 "qid": 0, 00:13:29.305 "state": "enabled", 00:13:29.305 "thread": "nvmf_tgt_poll_group_000" 00:13:29.305 } 00:13:29.305 ]' 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:29.305 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.561 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.561 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.561 11:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.819 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.382 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:30.639 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:30.639 11:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.639 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.897 00:13:30.897 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.897 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.897 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.462 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.463 { 00:13:31.463 "auth": { 00:13:31.463 "dhgroup": "null", 00:13:31.463 "digest": "sha512", 00:13:31.463 "state": "completed" 00:13:31.463 }, 00:13:31.463 "cntlid": 101, 00:13:31.463 "listen_address": { 00:13:31.463 "adrfam": "IPv4", 00:13:31.463 "traddr": "10.0.0.2", 00:13:31.463 "trsvcid": "4420", 00:13:31.463 "trtype": "TCP" 00:13:31.463 }, 00:13:31.463 "peer_address": { 00:13:31.463 "adrfam": "IPv4", 00:13:31.463 "traddr": "10.0.0.1", 00:13:31.463 "trsvcid": "51088", 00:13:31.463 "trtype": "TCP" 00:13:31.463 }, 00:13:31.463 "qid": 0, 00:13:31.463 "state": "enabled", 00:13:31.463 "thread": "nvmf_tgt_poll_group_000" 00:13:31.463 } 00:13:31.463 ]' 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:31.463 11:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.720 11:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:32.651 11:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:32.909 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:33.166 00:13:33.423 11:32:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.423 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.423 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.697 { 00:13:33.697 "auth": { 00:13:33.697 "dhgroup": "null", 00:13:33.697 "digest": "sha512", 00:13:33.697 "state": "completed" 00:13:33.697 }, 00:13:33.697 "cntlid": 103, 00:13:33.697 "listen_address": { 00:13:33.697 "adrfam": "IPv4", 00:13:33.697 "traddr": "10.0.0.2", 00:13:33.697 "trsvcid": "4420", 00:13:33.697 "trtype": "TCP" 00:13:33.697 }, 00:13:33.697 "peer_address": { 00:13:33.697 "adrfam": "IPv4", 00:13:33.697 "traddr": "10.0.0.1", 00:13:33.697 "trsvcid": "38134", 00:13:33.697 "trtype": "TCP" 00:13:33.697 }, 00:13:33.697 "qid": 0, 00:13:33.697 "state": "enabled", 00:13:33.697 "thread": "nvmf_tgt_poll_group_000" 00:13:33.697 } 00:13:33.697 ]' 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.697 11:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.697 11:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:33.697 11:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.697 11:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.697 11:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.697 11:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.267 11:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.880 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.138 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.702 00:13:35.702 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.702 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.702 11:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.961 { 00:13:35.961 "auth": { 00:13:35.961 "dhgroup": "ffdhe2048", 00:13:35.961 "digest": "sha512", 00:13:35.961 "state": 
"completed" 00:13:35.961 }, 00:13:35.961 "cntlid": 105, 00:13:35.961 "listen_address": { 00:13:35.961 "adrfam": "IPv4", 00:13:35.961 "traddr": "10.0.0.2", 00:13:35.961 "trsvcid": "4420", 00:13:35.961 "trtype": "TCP" 00:13:35.961 }, 00:13:35.961 "peer_address": { 00:13:35.961 "adrfam": "IPv4", 00:13:35.961 "traddr": "10.0.0.1", 00:13:35.961 "trsvcid": "38168", 00:13:35.961 "trtype": "TCP" 00:13:35.961 }, 00:13:35.961 "qid": 0, 00:13:35.961 "state": "enabled", 00:13:35.961 "thread": "nvmf_tgt_poll_group_000" 00:13:35.961 } 00:13:35.961 ]' 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.961 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.526 11:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.092 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:37.093 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:37.350 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.351 11:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.915 00:13:37.915 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.915 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.915 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.171 { 00:13:38.171 "auth": { 00:13:38.171 "dhgroup": "ffdhe2048", 00:13:38.171 "digest": "sha512", 00:13:38.171 "state": "completed" 00:13:38.171 }, 00:13:38.171 "cntlid": 107, 00:13:38.171 "listen_address": { 00:13:38.171 "adrfam": "IPv4", 00:13:38.171 "traddr": "10.0.0.2", 00:13:38.171 "trsvcid": "4420", 00:13:38.171 "trtype": "TCP" 00:13:38.171 }, 00:13:38.171 "peer_address": { 00:13:38.171 "adrfam": "IPv4", 00:13:38.171 "traddr": "10.0.0.1", 00:13:38.171 "trsvcid": "38188", 00:13:38.171 "trtype": "TCP" 00:13:38.171 }, 00:13:38.171 "qid": 0, 00:13:38.171 "state": "enabled", 00:13:38.171 "thread": "nvmf_tgt_poll_group_000" 00:13:38.171 } 00:13:38.171 ]' 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.171 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.428 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.428 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.428 11:32:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.428 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.428 11:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.686 11:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.618 11:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.876 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.440 00:13:40.440 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.440 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.440 11:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.697 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.697 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.697 11:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.697 11:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.955 11:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.955 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.955 { 00:13:40.955 "auth": { 00:13:40.955 "dhgroup": "ffdhe2048", 00:13:40.955 "digest": "sha512", 00:13:40.955 "state": "completed" 00:13:40.955 }, 00:13:40.955 "cntlid": 109, 00:13:40.955 "listen_address": { 00:13:40.955 "adrfam": "IPv4", 00:13:40.955 "traddr": "10.0.0.2", 00:13:40.955 "trsvcid": "4420", 00:13:40.955 "trtype": "TCP" 00:13:40.955 }, 00:13:40.955 "peer_address": { 00:13:40.956 "adrfam": "IPv4", 00:13:40.956 "traddr": "10.0.0.1", 00:13:40.956 "trsvcid": "38216", 00:13:40.956 "trtype": "TCP" 00:13:40.956 }, 00:13:40.956 "qid": 0, 00:13:40.956 "state": "enabled", 00:13:40.956 "thread": "nvmf_tgt_poll_group_000" 00:13:40.956 } 00:13:40.956 ]' 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.956 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.520 11:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:42.086 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.344 11:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.603 00:13:42.603 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.603 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.603 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.861 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.861 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.861 11:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.861 11:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:13:43.119 { 00:13:43.119 "auth": { 00:13:43.119 "dhgroup": "ffdhe2048", 00:13:43.119 "digest": "sha512", 00:13:43.119 "state": "completed" 00:13:43.119 }, 00:13:43.119 "cntlid": 111, 00:13:43.119 "listen_address": { 00:13:43.119 "adrfam": "IPv4", 00:13:43.119 "traddr": "10.0.0.2", 00:13:43.119 "trsvcid": "4420", 00:13:43.119 "trtype": "TCP" 00:13:43.119 }, 00:13:43.119 "peer_address": { 00:13:43.119 "adrfam": "IPv4", 00:13:43.119 "traddr": "10.0.0.1", 00:13:43.119 "trsvcid": "55860", 00:13:43.119 "trtype": "TCP" 00:13:43.119 }, 00:13:43.119 "qid": 0, 00:13:43.119 "state": "enabled", 00:13:43.119 "thread": "nvmf_tgt_poll_group_000" 00:13:43.119 } 00:13:43.119 ]' 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.119 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.377 11:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:13:44.308 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.308 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:44.308 11:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.308 11:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.308 11:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.309 11:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.875 00:13:44.875 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.875 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.875 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.133 { 00:13:45.133 "auth": { 00:13:45.133 "dhgroup": "ffdhe3072", 00:13:45.133 "digest": "sha512", 00:13:45.133 "state": "completed" 00:13:45.133 }, 00:13:45.133 "cntlid": 113, 00:13:45.133 "listen_address": { 00:13:45.133 "adrfam": "IPv4", 00:13:45.133 "traddr": "10.0.0.2", 00:13:45.133 "trsvcid": "4420", 00:13:45.133 "trtype": "TCP" 00:13:45.133 }, 00:13:45.133 "peer_address": { 00:13:45.133 "adrfam": "IPv4", 00:13:45.133 "traddr": "10.0.0.1", 00:13:45.133 "trsvcid": "55880", 00:13:45.133 "trtype": "TCP" 00:13:45.133 }, 00:13:45.133 "qid": 0, 00:13:45.133 "state": "enabled", 00:13:45.133 "thread": "nvmf_tgt_poll_group_000" 00:13:45.133 } 00:13:45.133 ]' 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.133 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.700 11:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:46.276 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.534 11:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.791 00:13:46.791 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.791 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.791 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.049 { 00:13:47.049 "auth": { 00:13:47.049 "dhgroup": "ffdhe3072", 00:13:47.049 "digest": "sha512", 00:13:47.049 "state": "completed" 00:13:47.049 }, 00:13:47.049 "cntlid": 115, 00:13:47.049 "listen_address": { 00:13:47.049 "adrfam": "IPv4", 00:13:47.049 "traddr": "10.0.0.2", 00:13:47.049 "trsvcid": "4420", 00:13:47.049 "trtype": "TCP" 00:13:47.049 }, 00:13:47.049 "peer_address": { 00:13:47.049 "adrfam": "IPv4", 00:13:47.049 "traddr": "10.0.0.1", 00:13:47.049 "trsvcid": "55896", 00:13:47.049 "trtype": "TCP" 00:13:47.049 }, 00:13:47.049 "qid": 0, 00:13:47.049 "state": "enabled", 00:13:47.049 "thread": "nvmf_tgt_poll_group_000" 00:13:47.049 } 00:13:47.049 ]' 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.049 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.306 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.306 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:47.306 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.306 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.306 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.306 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.564 11:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:48.128 11:32:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.128 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:48.128 11:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.128 11:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.451 11:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.451 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.451 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:48.451 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.710 11:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.967 00:13:48.967 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.967 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.967 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.225 { 00:13:49.225 "auth": { 00:13:49.225 "dhgroup": "ffdhe3072", 00:13:49.225 "digest": "sha512", 00:13:49.225 "state": "completed" 00:13:49.225 }, 00:13:49.225 "cntlid": 117, 00:13:49.225 "listen_address": { 00:13:49.225 "adrfam": "IPv4", 00:13:49.225 "traddr": "10.0.0.2", 00:13:49.225 "trsvcid": "4420", 00:13:49.225 "trtype": "TCP" 00:13:49.225 }, 00:13:49.225 "peer_address": { 00:13:49.225 "adrfam": "IPv4", 00:13:49.225 "traddr": "10.0.0.1", 00:13:49.225 "trsvcid": "55928", 00:13:49.225 "trtype": "TCP" 00:13:49.225 }, 00:13:49.225 "qid": 0, 00:13:49.225 "state": "enabled", 00:13:49.225 "thread": "nvmf_tgt_poll_group_000" 00:13:49.225 } 00:13:49.225 ]' 00:13:49.225 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.482 11:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.739 11:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:50.669 11:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.926 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:51.183 00:13:51.183 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.183 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.183 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.748 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.748 11:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.748 11:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.748 11:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.748 { 00:13:51.748 "auth": { 00:13:51.748 "dhgroup": "ffdhe3072", 00:13:51.748 "digest": "sha512", 00:13:51.748 "state": "completed" 00:13:51.748 }, 00:13:51.748 "cntlid": 119, 00:13:51.748 "listen_address": { 00:13:51.748 "adrfam": "IPv4", 00:13:51.748 "traddr": "10.0.0.2", 00:13:51.748 "trsvcid": "4420", 00:13:51.748 "trtype": "TCP" 00:13:51.748 }, 00:13:51.748 "peer_address": { 00:13:51.748 "adrfam": "IPv4", 00:13:51.748 "traddr": "10.0.0.1", 00:13:51.748 "trsvcid": "55962", 00:13:51.748 "trtype": "TCP" 00:13:51.748 }, 00:13:51.748 "qid": 0, 00:13:51.748 "state": "enabled", 00:13:51.748 "thread": "nvmf_tgt_poll_group_000" 00:13:51.748 } 00:13:51.748 ]' 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.748 
11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.748 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.005 11:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:52.938 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.199 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.767 00:13:53.767 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.767 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.767 11:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.026 { 00:13:54.026 "auth": { 00:13:54.026 "dhgroup": "ffdhe4096", 00:13:54.026 "digest": "sha512", 00:13:54.026 "state": "completed" 00:13:54.026 }, 00:13:54.026 "cntlid": 121, 00:13:54.026 "listen_address": { 00:13:54.026 "adrfam": "IPv4", 00:13:54.026 "traddr": "10.0.0.2", 00:13:54.026 "trsvcid": "4420", 00:13:54.026 "trtype": "TCP" 00:13:54.026 }, 00:13:54.026 "peer_address": { 00:13:54.026 "adrfam": "IPv4", 00:13:54.026 "traddr": "10.0.0.1", 00:13:54.026 "trsvcid": "51462", 00:13:54.026 "trtype": "TCP" 00:13:54.026 }, 00:13:54.026 "qid": 0, 00:13:54.026 "state": "enabled", 00:13:54.026 "thread": "nvmf_tgt_poll_group_000" 00:13:54.026 } 00:13:54.026 ]' 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.026 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.284 11:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret 
DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:13:55.218 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:55.219 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.476 11:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.734 00:13:55.734 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.734 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.734 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
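(Sketch, not part of the captured run: the verification step this trace performs after each authenticated attach, written out with the values expected at this point in the run — sha512 digest, ffdhe4096 group, controller name nvme0. The host-side bdev is queried over /var/tmp/host.sock as above; the subsystem qpairs are read from the target application, assumed here to be listening on rpc.py's default socket.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# the controller created by bdev_nvme_attach_controller must exist on the host side
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# the subsystem qpair must report a completed DH-HMAC-CHAP exchange with the negotiated parameters
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]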
00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.991 { 00:13:55.991 "auth": { 00:13:55.991 "dhgroup": "ffdhe4096", 00:13:55.991 "digest": "sha512", 00:13:55.991 "state": "completed" 00:13:55.991 }, 00:13:55.991 "cntlid": 123, 00:13:55.991 "listen_address": { 00:13:55.991 "adrfam": "IPv4", 00:13:55.991 "traddr": "10.0.0.2", 00:13:55.991 "trsvcid": "4420", 00:13:55.991 "trtype": "TCP" 00:13:55.991 }, 00:13:55.991 "peer_address": { 00:13:55.991 "adrfam": "IPv4", 00:13:55.991 "traddr": "10.0.0.1", 00:13:55.991 "trsvcid": "51472", 00:13:55.991 "trtype": "TCP" 00:13:55.991 }, 00:13:55.991 "qid": 0, 00:13:55.991 "state": "enabled", 00:13:55.991 "thread": "nvmf_tgt_poll_group_000" 00:13:55.991 } 00:13:55.991 ]' 00:13:55.991 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.248 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.504 11:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:13:57.068 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:57.325 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.582 11:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.838 00:13:57.838 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.838 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.838 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.094 { 00:13:58.094 "auth": { 00:13:58.094 "dhgroup": "ffdhe4096", 00:13:58.094 "digest": "sha512", 00:13:58.094 "state": "completed" 00:13:58.094 }, 00:13:58.094 "cntlid": 125, 00:13:58.094 "listen_address": { 00:13:58.094 "adrfam": "IPv4", 00:13:58.094 "traddr": "10.0.0.2", 00:13:58.094 "trsvcid": "4420", 00:13:58.094 "trtype": "TCP" 00:13:58.094 }, 00:13:58.094 "peer_address": { 00:13:58.094 "adrfam": "IPv4", 00:13:58.094 "traddr": "10.0.0.1", 00:13:58.094 "trsvcid": "51496", 00:13:58.094 
"trtype": "TCP" 00:13:58.094 }, 00:13:58.094 "qid": 0, 00:13:58.094 "state": "enabled", 00:13:58.094 "thread": "nvmf_tgt_poll_group_000" 00:13:58.094 } 00:13:58.094 ]' 00:13:58.094 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.352 11:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.610 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.559 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:13:59.560 11:32:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.560 11:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.125 00:14:00.125 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:00.125 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.125 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.382 { 00:14:00.382 "auth": { 00:14:00.382 "dhgroup": "ffdhe4096", 00:14:00.382 "digest": "sha512", 00:14:00.382 "state": "completed" 00:14:00.382 }, 00:14:00.382 "cntlid": 127, 00:14:00.382 "listen_address": { 00:14:00.382 "adrfam": "IPv4", 00:14:00.382 "traddr": "10.0.0.2", 00:14:00.382 "trsvcid": "4420", 00:14:00.382 "trtype": "TCP" 00:14:00.382 }, 00:14:00.382 "peer_address": { 00:14:00.382 "adrfam": "IPv4", 00:14:00.382 "traddr": "10.0.0.1", 00:14:00.382 "trsvcid": "51528", 00:14:00.382 "trtype": "TCP" 00:14:00.382 }, 00:14:00.382 "qid": 0, 00:14:00.382 "state": "enabled", 00:14:00.382 "thread": "nvmf_tgt_poll_group_000" 00:14:00.382 } 00:14:00.382 ]' 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.382 11:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.947 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:01.512 11:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.770 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.334 00:14:02.334 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.334 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.334 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.593 { 00:14:02.593 "auth": { 00:14:02.593 "dhgroup": "ffdhe6144", 00:14:02.593 "digest": "sha512", 00:14:02.593 "state": "completed" 00:14:02.593 }, 00:14:02.593 "cntlid": 129, 00:14:02.593 "listen_address": { 00:14:02.593 "adrfam": "IPv4", 00:14:02.593 "traddr": "10.0.0.2", 00:14:02.593 "trsvcid": "4420", 00:14:02.593 "trtype": "TCP" 00:14:02.593 }, 00:14:02.593 "peer_address": { 00:14:02.593 "adrfam": "IPv4", 00:14:02.593 "traddr": "10.0.0.1", 00:14:02.593 "trsvcid": "33748", 00:14:02.593 "trtype": "TCP" 00:14:02.593 }, 00:14:02.593 "qid": 0, 00:14:02.593 "state": "enabled", 00:14:02.593 "thread": "nvmf_tgt_poll_group_000" 00:14:02.593 } 00:14:02.593 ]' 00:14:02.593 11:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.593 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:02.593 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.851 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:02.851 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.851 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.851 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.851 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.109 11:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
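(Sketch, not part of the captured run: the per-iteration provisioning that the trace drives over JSON-RPC, shown for the sha512/ffdhe6144/key1 combination exercised next. Socket paths, addresses, NQNs and key names are the ones used throughout this log; key1/ckey1 refer to keys loaded earlier in the run, and the target application is assumed to be on rpc.py's default socket.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421

# host side: restrict the initiator to the digest/dhgroup under test
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# target side: authorize this host on the subsystem with the key pair under test
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach the controller; DH-HMAC-CHAP runs during this connect
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1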
00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.044 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.610 00:14:04.610 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.610 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.610 11:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.868 { 00:14:04.868 "auth": { 00:14:04.868 "dhgroup": "ffdhe6144", 00:14:04.868 "digest": "sha512", 00:14:04.868 "state": "completed" 00:14:04.868 }, 00:14:04.868 "cntlid": 131, 00:14:04.868 "listen_address": { 00:14:04.868 "adrfam": "IPv4", 00:14:04.868 "traddr": "10.0.0.2", 
00:14:04.868 "trsvcid": "4420", 00:14:04.868 "trtype": "TCP" 00:14:04.868 }, 00:14:04.868 "peer_address": { 00:14:04.868 "adrfam": "IPv4", 00:14:04.868 "traddr": "10.0.0.1", 00:14:04.868 "trsvcid": "33772", 00:14:04.868 "trtype": "TCP" 00:14:04.868 }, 00:14:04.868 "qid": 0, 00:14:04.868 "state": "enabled", 00:14:04.868 "thread": "nvmf_tgt_poll_group_000" 00:14:04.868 } 00:14:04.868 ]' 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.868 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.126 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:05.126 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.126 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.126 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.126 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.384 11:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:05.949 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.207 11:32:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.207 11:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.772 00:14:06.772 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.772 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.772 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.030 { 00:14:07.030 "auth": { 00:14:07.030 "dhgroup": "ffdhe6144", 00:14:07.030 "digest": "sha512", 00:14:07.030 "state": "completed" 00:14:07.030 }, 00:14:07.030 "cntlid": 133, 00:14:07.030 "listen_address": { 00:14:07.030 "adrfam": "IPv4", 00:14:07.030 "traddr": "10.0.0.2", 00:14:07.030 "trsvcid": "4420", 00:14:07.030 "trtype": "TCP" 00:14:07.030 }, 00:14:07.030 "peer_address": { 00:14:07.030 "adrfam": "IPv4", 00:14:07.030 "traddr": "10.0.0.1", 00:14:07.030 "trsvcid": "33802", 00:14:07.030 "trtype": "TCP" 00:14:07.030 }, 00:14:07.030 "qid": 0, 00:14:07.030 "state": "enabled", 00:14:07.030 "thread": "nvmf_tgt_poll_group_000" 00:14:07.030 } 00:14:07.030 ]' 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:07.030 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.287 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.287 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:07.287 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.544 11:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:14:08.107 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.107 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:08.107 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.107 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.365 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.365 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.365 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:08.365 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:08.622 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:08.622 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.622 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.622 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:08.623 11:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.188 
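Note the key3 round just above: nvmf_subsystem_add_host and bdev_nvme_attach_controller carry only --dhchap-key key3, with no controller key. That is a consequence of the ${ckeys[$3]:+...} expansion at target/auth.sh@37, which produces the extra --dhchap-ctrlr-key arguments only when a controller key exists for that index, so the key3 case exercises one-way authentication (the host proves its identity, the controller does not). A small illustration of that expansion idiom, with hypothetical array contents:

# Illustration of the ${arr[i]:+...} idiom seen at target/auth.sh@37 above.
# The array contents here are hypothetical; in the test they hold key names.
ckeys=(ckey0 ckey1 ckey2 "")   # no controller key for index 3

for keyid in 0 1 2 3; do
    # Expands to the two extra arguments only when ckeys[keyid] is non-empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> extra args: ${ckey[*]:-<none>}"
done
# key0..key2 print "--dhchap-ctrlr-key ckeyN"; key3 prints "<none>".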
00:14:09.188 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.188 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.188 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.446 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.446 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.446 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.446 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.446 11:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.446 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.446 { 00:14:09.446 "auth": { 00:14:09.446 "dhgroup": "ffdhe6144", 00:14:09.446 "digest": "sha512", 00:14:09.446 "state": "completed" 00:14:09.446 }, 00:14:09.446 "cntlid": 135, 00:14:09.447 "listen_address": { 00:14:09.447 "adrfam": "IPv4", 00:14:09.447 "traddr": "10.0.0.2", 00:14:09.447 "trsvcid": "4420", 00:14:09.447 "trtype": "TCP" 00:14:09.447 }, 00:14:09.447 "peer_address": { 00:14:09.447 "adrfam": "IPv4", 00:14:09.447 "traddr": "10.0.0.1", 00:14:09.447 "trsvcid": "33832", 00:14:09.447 "trtype": "TCP" 00:14:09.447 }, 00:14:09.447 "qid": 0, 00:14:09.447 "state": "enabled", 00:14:09.447 "thread": "nvmf_tgt_poll_group_000" 00:14:09.447 } 00:14:09.447 ]' 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.447 11:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.704 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.638 11:32:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.638 11:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.895 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.459 00:14:11.459 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.459 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.459 11:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.716 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.716 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.716 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.716 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.716 11:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.716 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.716 { 00:14:11.716 "auth": { 00:14:11.716 "dhgroup": "ffdhe8192", 00:14:11.716 "digest": "sha512", 
00:14:11.717 "state": "completed" 00:14:11.717 }, 00:14:11.717 "cntlid": 137, 00:14:11.717 "listen_address": { 00:14:11.717 "adrfam": "IPv4", 00:14:11.717 "traddr": "10.0.0.2", 00:14:11.717 "trsvcid": "4420", 00:14:11.717 "trtype": "TCP" 00:14:11.717 }, 00:14:11.717 "peer_address": { 00:14:11.717 "adrfam": "IPv4", 00:14:11.717 "traddr": "10.0.0.1", 00:14:11.717 "trsvcid": "33860", 00:14:11.717 "trtype": "TCP" 00:14:11.717 }, 00:14:11.717 "qid": 0, 00:14:11.717 "state": "enabled", 00:14:11.717 "thread": "nvmf_tgt_poll_group_000" 00:14:11.717 } 00:14:11.717 ]' 00:14:11.717 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.974 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.231 11:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.164 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:13.421 11:32:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.421 11:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.987 00:14:13.987 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.987 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.987 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.245 { 00:14:14.245 "auth": { 00:14:14.245 "dhgroup": "ffdhe8192", 00:14:14.245 "digest": "sha512", 00:14:14.245 "state": "completed" 00:14:14.245 }, 00:14:14.245 "cntlid": 139, 00:14:14.245 "listen_address": { 00:14:14.245 "adrfam": "IPv4", 00:14:14.245 "traddr": "10.0.0.2", 00:14:14.245 "trsvcid": "4420", 00:14:14.245 "trtype": "TCP" 00:14:14.245 }, 00:14:14.245 "peer_address": { 00:14:14.245 "adrfam": "IPv4", 00:14:14.245 "traddr": "10.0.0.1", 00:14:14.245 "trsvcid": "34304", 00:14:14.245 "trtype": "TCP" 00:14:14.245 }, 00:14:14.245 "qid": 0, 00:14:14.245 "state": "enabled", 00:14:14.245 "thread": "nvmf_tgt_poll_group_000" 00:14:14.245 } 00:14:14.245 ]' 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.245 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.505 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.505 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
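Throughout this part of the log two RPC endpoints alternate: rpc_cmd drives the nvmf target application on its default socket (the target runs inside the nvmf_tgt_ns_spdk network namespace, as the startup line later in the log shows), while hostrpc is a thin wrapper that points rpc.py at /var/tmp/host.sock, where a second SPDK application acts as the NVMe-oF initiator through bdev_nvme. A simplified sketch of those two wrappers; the real rpc_cmd also handles the network namespace and retries, which is omitted here, and the name tgtrpc is only illustrative.

# Simplified stand-ins for the two RPC wrappers implied by the log.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

tgtrpc()  { "$rpc_py" "$@"; }                        # target-side RPCs (rpc_cmd in the log)
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }  # initiator-side RPCs

tgtrpc  nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
hostrpc bdev_nvme_get_controllers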
00:14:14.505 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.505 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.505 11:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.764 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:01:MmIwMWM1NDY3MjE4MDgyYTg5ZDBjNzNhMmYwMzMyNzOMU4pH: --dhchap-ctrl-secret DHHC-1:02:YzllYzUxMzhlM2ZlMzcxZjNlM2M5MzRmOTgzYWEwZWI1YjliZmY5ZmNlNzNkY2Ix+7bkgA==: 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.332 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:15.590 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:15.590 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.590 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:15.590 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:15.590 11:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.590 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.523 00:14:16.523 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.523 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.523 11:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.781 { 00:14:16.781 "auth": { 00:14:16.781 "dhgroup": "ffdhe8192", 00:14:16.781 "digest": "sha512", 00:14:16.781 "state": "completed" 00:14:16.781 }, 00:14:16.781 "cntlid": 141, 00:14:16.781 "listen_address": { 00:14:16.781 "adrfam": "IPv4", 00:14:16.781 "traddr": "10.0.0.2", 00:14:16.781 "trsvcid": "4420", 00:14:16.781 "trtype": "TCP" 00:14:16.781 }, 00:14:16.781 "peer_address": { 00:14:16.781 "adrfam": "IPv4", 00:14:16.781 "traddr": "10.0.0.1", 00:14:16.781 "trsvcid": "34332", 00:14:16.781 "trtype": "TCP" 00:14:16.781 }, 00:14:16.781 "qid": 0, 00:14:16.781 "state": "enabled", 00:14:16.781 "thread": "nvmf_tgt_poll_group_000" 00:14:16.781 } 00:14:16.781 ]' 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.781 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.039 11:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:02:YmIzM2Q4ZGM4NmJlN2Y1YjVhMjhiZDJiYzAxOGQ4ODAyMzY1Zjc3NzBjZjAwYTVlTWVtCA==: --dhchap-ctrl-secret DHHC-1:01:ZjVjY2UxZDE0MjJiNThiOWE2NTQ0Zjg3MzljNGIwOGEwfG0l: 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:17.974 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.233 11:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.168 00:14:19.168 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.168 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.168 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
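Each round also repeats the authentication from the kernel initiator: after the SPDK host bdev is detached, nvme-cli connects to the same subsystem with the plaintext DHHC-1 secrets (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key) and then disconnects, as in the nvme connect/disconnect pairs above. A hedged sketch with placeholder secrets follows; the real secrets must be the ones paired with the keys registered on the target for that round.

# Sketch of the nvme-cli half of a round. The DHHC-1 strings below are
# placeholders; in the log they are the real secrets matching key0/ckey0 etc.
hostid=891080d4-f96c-4735-b9e2-e3ce9892e421
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "${hostid}" \
    --dhchap-secret 'DHHC-1:00:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'

# The target-side qpair can be inspected the same way as for the SPDK initiator
# (nvmf_subsystem_get_qpairs); afterwards the kernel controller is torn down.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0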
00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.427 { 00:14:19.427 "auth": { 00:14:19.427 "dhgroup": "ffdhe8192", 00:14:19.427 "digest": "sha512", 00:14:19.427 "state": "completed" 00:14:19.427 }, 00:14:19.427 "cntlid": 143, 00:14:19.427 "listen_address": { 00:14:19.427 "adrfam": "IPv4", 00:14:19.427 "traddr": "10.0.0.2", 00:14:19.427 "trsvcid": "4420", 00:14:19.427 "trtype": "TCP" 00:14:19.427 }, 00:14:19.427 "peer_address": { 00:14:19.427 "adrfam": "IPv4", 00:14:19.427 "traddr": "10.0.0.1", 00:14:19.427 "trsvcid": "34350", 00:14:19.427 "trtype": "TCP" 00:14:19.427 }, 00:14:19.427 "qid": 0, 00:14:19.427 "state": "enabled", 00:14:19.427 "thread": "nvmf_tgt_poll_group_000" 00:14:19.427 } 00:14:19.427 ]' 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.427 11:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.685 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:20.620 11:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.878 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.875 00:14:21.875 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.875 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.875 11:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.875 { 00:14:21.875 "auth": { 00:14:21.875 "dhgroup": "ffdhe8192", 00:14:21.875 "digest": "sha512", 00:14:21.875 "state": "completed" 00:14:21.875 }, 00:14:21.875 "cntlid": 145, 00:14:21.875 "listen_address": { 00:14:21.875 "adrfam": "IPv4", 00:14:21.875 "traddr": "10.0.0.2", 00:14:21.875 "trsvcid": "4420", 00:14:21.875 "trtype": "TCP" 00:14:21.875 }, 00:14:21.875 "peer_address": { 00:14:21.875 "adrfam": "IPv4", 00:14:21.875 "traddr": "10.0.0.1", 00:14:21.875 "trsvcid": "34382", 00:14:21.875 "trtype": "TCP" 00:14:21.875 }, 00:14:21.875 "qid": 0, 00:14:21.875 "state": "enabled", 00:14:21.875 "thread": "nvmf_tgt_poll_group_000" 00:14:21.875 } 
00:14:21.875 ]' 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.875 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.134 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.134 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.134 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.134 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.134 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.134 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.393 11:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:00:Mzk0NjEzMGY2NDlmM2VkY2EyMDk0MmE3ZWNlYjJhM2Q3OTg4MTYxNDZiMWE4ZDFmyEbbPA==: --dhchap-ctrl-secret DHHC-1:03:NjY4ZjdkMzE5NDgxMzg4MDNkNjkyMTZmODMyYjc0NjRkMDJhODY4NGFjNDU0ZDU3YTY5ZDdhNjBiYjZiZDBjYt73KVU=: 00:14:23.327 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.327 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:23.327 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.327 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.328 11:33:00 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:23.328 11:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:23.895 2024/07/15 11:33:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:23.895 request: 00:14:23.895 { 00:14:23.895 "method": "bdev_nvme_attach_controller", 00:14:23.895 "params": { 00:14:23.895 "name": "nvme0", 00:14:23.895 "trtype": "tcp", 00:14:23.895 "traddr": "10.0.0.2", 00:14:23.895 "adrfam": "ipv4", 00:14:23.895 "trsvcid": "4420", 00:14:23.895 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:23.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421", 00:14:23.895 "prchk_reftag": false, 00:14:23.895 "prchk_guard": false, 00:14:23.895 "hdgst": false, 00:14:23.895 "ddgst": false, 00:14:23.895 "dhchap_key": "key2" 00:14:23.895 } 00:14:23.895 } 00:14:23.895 Got JSON-RPC error response 00:14:23.895 GoRPCClient: error on JSON-RPC call 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:23.895 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:24.831 2024/07/15 11:33:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:24.831 request: 00:14:24.831 { 00:14:24.831 "method": "bdev_nvme_attach_controller", 00:14:24.831 "params": { 00:14:24.831 "name": "nvme0", 00:14:24.831 "trtype": "tcp", 00:14:24.831 "traddr": "10.0.0.2", 00:14:24.831 "adrfam": "ipv4", 00:14:24.831 "trsvcid": "4420", 00:14:24.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:24.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421", 00:14:24.831 "prchk_reftag": false, 00:14:24.831 "prchk_guard": false, 00:14:24.831 "hdgst": false, 00:14:24.831 "ddgst": false, 00:14:24.831 "dhchap_key": "key1", 00:14:24.831 "dhchap_ctrlr_key": "ckey2" 00:14:24.831 } 00:14:24.831 } 00:14:24.831 Got JSON-RPC error response 00:14:24.831 GoRPCClient: error on JSON-RPC call 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key1 00:14:24.831 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.832 11:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.832 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.398 2024/07/15 11:33:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:25.398 request: 00:14:25.398 { 00:14:25.398 "method": "bdev_nvme_attach_controller", 00:14:25.398 "params": { 00:14:25.398 "name": "nvme0", 00:14:25.398 "trtype": "tcp", 00:14:25.398 "traddr": "10.0.0.2", 00:14:25.398 "adrfam": "ipv4", 00:14:25.398 "trsvcid": "4420", 00:14:25.398 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:14:25.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421", 00:14:25.398 "prchk_reftag": false, 00:14:25.398 "prchk_guard": false, 00:14:25.398 "hdgst": false, 00:14:25.398 "ddgst": false, 00:14:25.398 "dhchap_key": "key1", 00:14:25.398 "dhchap_ctrlr_key": "ckey1" 00:14:25.398 } 00:14:25.398 } 00:14:25.398 Got JSON-RPC error response 00:14:25.398 GoRPCClient: error on JSON-RPC call 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77893 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77893 ']' 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77893 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77893 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:25.398 killing process with pid 77893 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77893' 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77893 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77893 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82860 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82860 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82860 ']' 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.398 11:33:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.398 11:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82860 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82860 ']' 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.965 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.222 11:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:27.155 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.155 { 00:14:27.155 "auth": { 00:14:27.155 "dhgroup": 
"ffdhe8192", 00:14:27.155 "digest": "sha512", 00:14:27.155 "state": "completed" 00:14:27.155 }, 00:14:27.155 "cntlid": 1, 00:14:27.155 "listen_address": { 00:14:27.155 "adrfam": "IPv4", 00:14:27.155 "traddr": "10.0.0.2", 00:14:27.155 "trsvcid": "4420", 00:14:27.155 "trtype": "TCP" 00:14:27.155 }, 00:14:27.155 "peer_address": { 00:14:27.155 "adrfam": "IPv4", 00:14:27.155 "traddr": "10.0.0.1", 00:14:27.155 "trsvcid": "56094", 00:14:27.155 "trtype": "TCP" 00:14:27.155 }, 00:14:27.155 "qid": 0, 00:14:27.155 "state": "enabled", 00:14:27.155 "thread": "nvmf_tgt_poll_group_000" 00:14:27.155 } 00:14:27.155 ]' 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.155 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.413 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:27.413 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.413 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.413 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.413 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.672 11:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid 891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-secret DHHC-1:03:MmM4Nzg3ODA1ZTJjY2Y2NDBlMDYxNGIwOGFlYzU4MWM3MTFmYTA3NmM5MTFiZDUzMWMzNjE2MmE1MWY0ODlmOHJ+gMM=: 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --dhchap-key key3 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:28.606 11:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.606 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.173 2024/07/15 11:33:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:29.173 request: 00:14:29.173 { 00:14:29.173 "method": "bdev_nvme_attach_controller", 00:14:29.173 "params": { 00:14:29.173 "name": "nvme0", 00:14:29.173 "trtype": "tcp", 00:14:29.173 "traddr": "10.0.0.2", 00:14:29.173 "adrfam": "ipv4", 00:14:29.173 "trsvcid": "4420", 00:14:29.173 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:29.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421", 00:14:29.173 "prchk_reftag": false, 00:14:29.173 "prchk_guard": false, 00:14:29.173 "hdgst": false, 00:14:29.173 "ddgst": false, 00:14:29.173 "dhchap_key": "key3" 00:14:29.173 } 00:14:29.173 } 00:14:29.173 Got JSON-RPC error response 00:14:29.173 GoRPCClient: error on JSON-RPC call 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.173 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.432 2024/07/15 11:33:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:29.432 request: 00:14:29.432 { 00:14:29.432 "method": "bdev_nvme_attach_controller", 00:14:29.432 "params": { 00:14:29.432 "name": "nvme0", 00:14:29.432 "trtype": "tcp", 00:14:29.432 "traddr": "10.0.0.2", 00:14:29.432 "adrfam": "ipv4", 00:14:29.432 "trsvcid": "4420", 00:14:29.432 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:29.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421", 00:14:29.432 "prchk_reftag": false, 00:14:29.432 "prchk_guard": false, 00:14:29.432 "hdgst": false, 00:14:29.432 "ddgst": false, 00:14:29.432 "dhchap_key": "key3" 00:14:29.432 } 00:14:29.432 } 00:14:29.432 Got JSON-RPC error response 00:14:29.432 GoRPCClient: error on JSON-RPC call 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:29.432 11:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:29.999 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:29.999 2024/07/15 11:33:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:29.999 request: 00:14:29.999 { 00:14:29.999 "method": "bdev_nvme_attach_controller", 00:14:29.999 "params": { 00:14:29.999 "name": "nvme0", 00:14:29.999 "trtype": "tcp", 00:14:29.999 "traddr": "10.0.0.2", 00:14:29.999 "adrfam": "ipv4", 00:14:29.999 "trsvcid": "4420", 00:14:29.999 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:29.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421", 00:14:29.999 "prchk_reftag": false, 00:14:29.999 "prchk_guard": false, 00:14:29.999 "hdgst": false, 00:14:29.999 "ddgst": false, 00:14:29.999 "dhchap_key": "key0", 00:14:30.000 "dhchap_ctrlr_key": "key1" 00:14:30.000 } 00:14:30.000 } 00:14:30.000 Got JSON-RPC error response 00:14:30.000 GoRPCClient: error on JSON-RPC call 00:14:30.000 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:30.000 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:30.000 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:30.000 11:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:30.000 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:30.000 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:30.565 00:14:30.565 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:30.565 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:30.565 11:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.565 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.565 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.565 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77924 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 77924 ']' 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77924 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77924 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:31.133 killing process with pid 77924 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77924' 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77924 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77924 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.133 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.392 rmmod nvme_tcp 00:14:31.392 rmmod nvme_fabrics 00:14:31.392 rmmod nvme_keyring 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82860 ']' 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82860 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82860 ']' 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82860 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82860 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.392 killing process with pid 82860 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82860' 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82860 00:14:31.392 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82860 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cAm /tmp/spdk.key-sha256.IAG /tmp/spdk.key-sha384.UI6 /tmp/spdk.key-sha512.h6N /tmp/spdk.key-sha512.8w4 /tmp/spdk.key-sha384.Xk5 /tmp/spdk.key-sha256.bzN '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:31.650 00:14:31.650 real 2m59.039s 00:14:31.650 user 7m16.744s 00:14:31.650 sys 0m21.623s 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.650 11:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.650 ************************************ 00:14:31.650 END TEST nvmf_auth_target 00:14:31.650 ************************************ 00:14:31.650 11:33:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.650 11:33:08 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:31.650 11:33:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:31.650 11:33:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:31.650 11:33:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.650 11:33:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.650 ************************************ 00:14:31.650 START TEST nvmf_bdevio_no_huge 00:14:31.650 ************************************ 00:14:31.650 11:33:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:31.650 * Looking for test storage... 
00:14:31.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.650 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.651 11:33:09 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:31.651 Cannot find device "nvmf_tgt_br" 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.651 Cannot find device "nvmf_tgt_br2" 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:31.651 Cannot find device "nvmf_tgt_br" 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:31.651 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:31.909 Cannot find device "nvmf_tgt_br2" 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.909 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:31.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:31.910 00:14:31.910 --- 10.0.0.2 ping statistics --- 00:14:31.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.910 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:31.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:31.910 00:14:31.910 --- 10.0.0.3 ping statistics --- 00:14:31.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.910 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:14:31.910 00:14:31.910 --- 10.0.0.1 ping statistics --- 00:14:31.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.910 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.910 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83244 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83244 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83244 ']' 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.191 11:33:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:32.191 [2024-07-15 11:33:09.455502] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:32.191 [2024-07-15 11:33:09.455610] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:32.191 [2024-07-15 11:33:09.597599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.460 [2024-07-15 11:33:09.707204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.460 [2024-07-15 11:33:09.707274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.460 [2024-07-15 11:33:09.707286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.460 [2024-07-15 11:33:09.707294] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.460 [2024-07-15 11:33:09.707302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.460 [2024-07-15 11:33:09.708024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:32.460 [2024-07-15 11:33:09.708134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:32.460 [2024-07-15 11:33:09.708212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:32.460 [2024-07-15 11:33:09.708691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:33.026 [2024-07-15 11:33:10.474250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.026 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:33.027 Malloc0 00:14:33.027 
11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.027 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:33.285 [2024-07-15 11:33:10.511696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:33.285 { 00:14:33.285 "params": { 00:14:33.285 "name": "Nvme$subsystem", 00:14:33.285 "trtype": "$TEST_TRANSPORT", 00:14:33.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.285 "adrfam": "ipv4", 00:14:33.285 "trsvcid": "$NVMF_PORT", 00:14:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.285 "hdgst": ${hdgst:-false}, 00:14:33.285 "ddgst": ${ddgst:-false} 00:14:33.285 }, 00:14:33.285 "method": "bdev_nvme_attach_controller" 00:14:33.285 } 00:14:33.285 EOF 00:14:33.285 )") 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:33.285 11:33:10 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:33.285 "params": { 00:14:33.285 "name": "Nvme1", 00:14:33.285 "trtype": "tcp", 00:14:33.285 "traddr": "10.0.0.2", 00:14:33.285 "adrfam": "ipv4", 00:14:33.285 "trsvcid": "4420", 00:14:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.285 "hdgst": false, 00:14:33.285 "ddgst": false 00:14:33.285 }, 00:14:33.285 "method": "bdev_nvme_attach_controller" 00:14:33.285 }' 00:14:33.285 [2024-07-15 11:33:10.581283] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:33.285 [2024-07-15 11:33:10.581398] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83298 ] 00:14:33.285 [2024-07-15 11:33:10.728008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.543 [2024-07-15 11:33:10.897528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.543 [2024-07-15 11:33:10.897626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.543 [2024-07-15 11:33:10.898040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.801 I/O targets: 00:14:33.801 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:33.801 00:14:33.801 00:14:33.801 CUnit - A unit testing framework for C - Version 2.1-3 00:14:33.801 http://cunit.sourceforge.net/ 00:14:33.801 00:14:33.801 00:14:33.801 Suite: bdevio tests on: Nvme1n1 00:14:33.801 Test: blockdev write read block ...passed 00:14:33.801 Test: blockdev write zeroes read block ...passed 00:14:33.801 Test: blockdev write zeroes read no split ...passed 00:14:33.801 Test: blockdev write zeroes read split ...passed 00:14:33.801 Test: blockdev write zeroes read split partial ...passed 00:14:33.801 Test: blockdev reset ...[2024-07-15 11:33:11.214524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:33.801 [2024-07-15 11:33:11.214963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b6460 (9): Bad file descriptor 00:14:33.801 [2024-07-15 11:33:11.235417] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:33.801 passed 00:14:33.801 Test: blockdev write read 8 blocks ...passed 00:14:33.801 Test: blockdev write read size > 128k ...passed 00:14:33.801 Test: blockdev write read invalid size ...passed 00:14:34.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:34.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:34.059 Test: blockdev write read max offset ...passed 00:14:34.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:34.059 Test: blockdev writev readv 8 blocks ...passed 00:14:34.059 Test: blockdev writev readv 30 x 1block ...passed 00:14:34.059 Test: blockdev writev readv block ...passed 00:14:34.059 Test: blockdev writev readv size > 128k ...passed 00:14:34.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:34.059 Test: blockdev comparev and writev ...[2024-07-15 11:33:11.408157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.408213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.408234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.408245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.408814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.408847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.408866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.408877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.409349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.409379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.409398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.409408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.409788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.409818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.409836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.059 [2024-07-15 11:33:11.409857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:34.059 passed 00:14:34.059 Test: blockdev nvme passthru rw ...passed 00:14:34.059 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:33:11.494138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.059 [2024-07-15 11:33:11.494196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.494329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.059 [2024-07-15 11:33:11.494355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.494465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.059 [2024-07-15 11:33:11.494483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:34.059 [2024-07-15 11:33:11.494604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.059 [2024-07-15 11:33:11.494632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:34.059 passed 00:14:34.059 Test: blockdev nvme admin passthru ...passed 00:14:34.317 Test: blockdev copy ...passed 00:14:34.317 00:14:34.317 Run Summary: Type Total Ran Passed Failed Inactive 00:14:34.317 suites 1 1 n/a 0 0 00:14:34.317 tests 23 23 23 0 0 00:14:34.317 asserts 152 152 152 0 n/a 00:14:34.317 00:14:34.317 Elapsed time = 0.920 seconds 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.575 11:33:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.575 rmmod nvme_tcp 00:14:34.575 rmmod nvme_fabrics 00:14:34.575 rmmod nvme_keyring 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83244 ']' 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83244 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83244 ']' 00:14:34.575 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83244 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83244 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:34.832 killing process with pid 83244 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83244' 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83244 00:14:34.832 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83244 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:35.089 ************************************ 00:14:35.089 END TEST nvmf_bdevio_no_huge 00:14:35.089 ************************************ 00:14:35.089 00:14:35.089 real 0m3.543s 00:14:35.089 user 0m12.959s 00:14:35.089 sys 0m1.314s 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.089 11:33:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:35.089 11:33:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.089 11:33:12 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:35.089 11:33:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.089 11:33:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.090 11:33:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.090 ************************************ 00:14:35.090 START TEST nvmf_tls 00:14:35.090 ************************************ 00:14:35.090 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:35.348 * Looking for test storage... 
00:14:35.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:35.348 Cannot find device "nvmf_tgt_br" 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.348 Cannot find device "nvmf_tgt_br2" 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:35.348 Cannot find device "nvmf_tgt_br" 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:35.348 Cannot find device "nvmf_tgt_br2" 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.348 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.606 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:35.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:14:35.607 00:14:35.607 --- 10.0.0.2 ping statistics --- 00:14:35.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.607 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:35.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:35.607 00:14:35.607 --- 10.0.0.3 ping statistics --- 00:14:35.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.607 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:35.607 00:14:35.607 --- 10.0.0.1 ping statistics --- 00:14:35.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.607 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.607 11:33:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83485 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83485 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83485 ']' 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.607 11:33:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.607 [2024-07-15 11:33:13.060093] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:14:35.607 [2024-07-15 11:33:13.060186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.864 [2024-07-15 11:33:13.196484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.864 [2024-07-15 11:33:13.272516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.864 [2024-07-15 11:33:13.272590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:35.864 [2024-07-15 11:33:13.272610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.864 [2024-07-15 11:33:13.272625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.864 [2024-07-15 11:33:13.272637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.864 [2024-07-15 11:33:13.272680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:36.793 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:37.050 true 00:14:37.050 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:37.050 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:37.309 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:37.309 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:37.309 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:37.566 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:37.566 11:33:14 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:37.823 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:37.823 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:37.823 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:38.081 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:38.081 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:38.647 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:38.647 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:38.647 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:38.647 11:33:15 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:38.904 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:38.904 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:38.905 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:39.163 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:39.163 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:14:39.421 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:39.421 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:39.421 11:33:16 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:39.679 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:39.679 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:39.937 11:33:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.O0n7R8f0NM 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Mi1Q8cSztT 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.O0n7R8f0NM 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Mi1Q8cSztT 00:14:40.196 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:40.454 11:33:17 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:40.712 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.O0n7R8f0NM 
00:14:40.712 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.O0n7R8f0NM 00:14:40.712 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.970 [2024-07-15 11:33:18.393353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.970 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:41.537 11:33:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:41.537 [2024-07-15 11:33:18.997477] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.537 [2024-07-15 11:33:18.997712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.796 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:42.054 malloc0 00:14:42.054 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:42.312 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O0n7R8f0NM 00:14:42.570 [2024-07-15 11:33:19.892709] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:42.571 11:33:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.O0n7R8f0NM 00:14:54.765 Initializing NVMe Controllers 00:14:54.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:54.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:54.765 Initialization complete. Launching workers. 
00:14:54.765 ======================================================== 00:14:54.765 Latency(us) 00:14:54.765 Device Information : IOPS MiB/s Average min max 00:14:54.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9595.29 37.48 6671.91 1356.94 11051.79 00:14:54.765 ======================================================== 00:14:54.765 Total : 9595.29 37.48 6671.91 1356.94 11051.79 00:14:54.765 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.O0n7R8f0NM 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O0n7R8f0NM' 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.765 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83853 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83853 /var/tmp/bdevperf.sock 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83853 ']' 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.766 [2024-07-15 11:33:30.164771] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:14:54.766 [2024-07-15 11:33:30.164861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83853 ] 00:14:54.766 [2024-07-15 11:33:30.297774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.766 [2024-07-15 11:33:30.358571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O0n7R8f0NM 00:14:54.766 [2024-07-15 11:33:30.745008] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.766 [2024-07-15 11:33:30.745135] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:54.766 TLSTESTn1 00:14:54.766 11:33:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:54.766 Running I/O for 10 seconds... 00:15:04.727 00:15:04.727 Latency(us) 00:15:04.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.727 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:04.727 Verification LBA range: start 0x0 length 0x2000 00:15:04.727 TLSTESTn1 : 10.02 3724.26 14.55 0.00 0.00 34301.24 7208.96 35746.91 00:15:04.727 =================================================================================================================== 00:15:04.727 Total : 3724.26 14.55 0.00 0.00 34301.24 7208.96 35746.91 00:15:04.727 0 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83853 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83853 ']' 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83853 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83853 00:15:04.727 killing process with pid 83853 00:15:04.727 Received shutdown signal, test time was about 10.000000 seconds 00:15:04.727 00:15:04.727 Latency(us) 00:15:04.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:04.727 =================================================================================================================== 00:15:04.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
83853' 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83853 00:15:04.727 [2024-07-15 11:33:41.058384] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83853 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mi1Q8cSztT 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mi1Q8cSztT 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mi1Q8cSztT 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Mi1Q8cSztT' 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83986 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83986 /var/tmp/bdevperf.sock 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83986 ']' 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.727 11:33:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.727 [2024-07-15 11:33:41.273992] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:04.727 [2024-07-15 11:33:41.274354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83986 ] 00:15:04.727 [2024-07-15 11:33:41.410086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.727 [2024-07-15 11:33:41.469152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.986 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.986 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:04.986 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mi1Q8cSztT 00:15:05.244 [2024-07-15 11:33:42.712462] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.244 [2024-07-15 11:33:42.712613] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:05.504 [2024-07-15 11:33:42.720569] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:05.504 [2024-07-15 11:33:42.721358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e2ca0 (107): Transport endpoint is not connected 00:15:05.504 [2024-07-15 11:33:42.722333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e2ca0 (9): Bad file descriptor 00:15:05.504 [2024-07-15 11:33:42.723328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:05.504 [2024-07-15 11:33:42.723369] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:05.504 [2024-07-15 11:33:42.723399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:05.504 2024/07/15 11:33:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.Mi1Q8cSztT subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:05.504 request: 00:15:05.504 { 00:15:05.504 "method": "bdev_nvme_attach_controller", 00:15:05.504 "params": { 00:15:05.504 "name": "TLSTEST", 00:15:05.504 "trtype": "tcp", 00:15:05.504 "traddr": "10.0.0.2", 00:15:05.504 "adrfam": "ipv4", 00:15:05.504 "trsvcid": "4420", 00:15:05.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.504 "prchk_reftag": false, 00:15:05.504 "prchk_guard": false, 00:15:05.504 "hdgst": false, 00:15:05.504 "ddgst": false, 00:15:05.504 "psk": "/tmp/tmp.Mi1Q8cSztT" 00:15:05.504 } 00:15:05.504 } 00:15:05.504 Got JSON-RPC error response 00:15:05.504 GoRPCClient: error on JSON-RPC call 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83986 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83986 ']' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83986 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83986 00:15:05.504 killing process with pid 83986 00:15:05.504 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.504 00:15:05.504 Latency(us) 00:15:05.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.504 =================================================================================================================== 00:15:05.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83986' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83986 00:15:05.504 [2024-07-15 11:33:42.779983] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83986 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O0n7R8f0NM 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O0n7R8f0NM 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:05.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.O0n7R8f0NM 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O0n7R8f0NM' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84031 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84031 /var/tmp/bdevperf.sock 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84031 ']' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.504 11:33:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.763 [2024-07-15 11:33:43.023406] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:05.763 [2024-07-15 11:33:43.024343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84031 ] 00:15:05.763 [2024-07-15 11:33:43.164835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.763 [2024-07-15 11:33:43.232450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.696 11:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:06.696 11:33:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:06.696 11:33:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.O0n7R8f0NM 00:15:06.955 [2024-07-15 11:33:44.218001] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:06.955 [2024-07-15 11:33:44.218126] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:06.955 [2024-07-15 11:33:44.223078] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:06.955 [2024-07-15 11:33:44.223141] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:06.955 [2024-07-15 11:33:44.223204] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:06.955 [2024-07-15 11:33:44.223739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153bca0 (107): Transport endpoint is not connected 00:15:06.955 [2024-07-15 11:33:44.224721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153bca0 (9): Bad file descriptor 00:15:06.955 [2024-07-15 11:33:44.225717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:06.955 [2024-07-15 11:33:44.225743] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:06.955 [2024-07-15 11:33:44.225759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:06.955 2024/07/15 11:33:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.O0n7R8f0NM subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:06.955 request: 00:15:06.955 { 00:15:06.955 "method": "bdev_nvme_attach_controller", 00:15:06.955 "params": { 00:15:06.955 "name": "TLSTEST", 00:15:06.955 "trtype": "tcp", 00:15:06.955 "traddr": "10.0.0.2", 00:15:06.955 "adrfam": "ipv4", 00:15:06.955 "trsvcid": "4420", 00:15:06.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:06.955 "prchk_reftag": false, 00:15:06.955 "prchk_guard": false, 00:15:06.955 "hdgst": false, 00:15:06.955 "ddgst": false, 00:15:06.955 "psk": "/tmp/tmp.O0n7R8f0NM" 00:15:06.955 } 00:15:06.955 } 00:15:06.955 Got JSON-RPC error response 00:15:06.955 GoRPCClient: error on JSON-RPC call 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84031 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84031 ']' 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84031 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84031 00:15:06.955 killing process with pid 84031 00:15:06.955 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.955 00:15:06.955 Latency(us) 00:15:06.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.955 =================================================================================================================== 00:15:06.955 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84031' 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84031 00:15:06.955 [2024-07-15 11:33:44.277510] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:06.955 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84031 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O0n7R8f0NM 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O0n7R8f0NM 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.O0n7R8f0NM 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.O0n7R8f0NM' 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84078 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84078 /var/tmp/bdevperf.sock 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84078 ']' 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.214 11:33:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.214 [2024-07-15 11:33:44.495868] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:07.214 [2024-07-15 11:33:44.495975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84078 ] 00:15:07.214 [2024-07-15 11:33:44.627910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.473 [2024-07-15 11:33:44.715864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.408 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.408 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:08.408 11:33:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O0n7R8f0NM 00:15:08.408 [2024-07-15 11:33:45.803021] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.408 [2024-07-15 11:33:45.803161] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.408 [2024-07-15 11:33:45.810736] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:08.408 [2024-07-15 11:33:45.810807] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:08.408 [2024-07-15 11:33:45.810938] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:08.408 [2024-07-15 11:33:45.811887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61bca0 (107): Transport endpoint is not connected 00:15:08.408 [2024-07-15 11:33:45.812867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61bca0 (9): Bad file descriptor 00:15:08.408 [2024-07-15 11:33:45.813861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:08.408 [2024-07-15 11:33:45.813887] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:08.408 [2024-07-15 11:33:45.813903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
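As with the host2 case, this attempt against cnode2 is wrapped in the harness's NOT helper, i.e. the RPC is supposed to fail because no PSK was registered for that host/subsystem pair. A rough illustration of that pattern outside the bash harness might look like the following; the rpc.py arguments are copied from the log, while the wrapper function and use of subprocess are illustrative:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def expect_rpc_failure(args):
        """Run an rpc.py call that is expected to fail and assert that it did."""
        result = subprocess.run(
            [RPC, "-s", "/var/tmp/bdevperf.sock"] + args,
            capture_output=True, text=True,
        )
        assert result.returncode != 0, "negative test unexpectedly succeeded"
        return result.stderr

    # Host/subsystem pair for which no PSK was registered (from the log above).
    expect_rpc_failure([
        "bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode2", "-q", "nqn.2016-06.io.spdk:host1",
        "--psk", "/tmp/tmp.O0n7R8f0NM",
    ])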
00:15:08.408 2024/07/15 11:33:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.O0n7R8f0NM subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:08.408 request: 00:15:08.408 { 00:15:08.408 "method": "bdev_nvme_attach_controller", 00:15:08.408 "params": { 00:15:08.408 "name": "TLSTEST", 00:15:08.408 "trtype": "tcp", 00:15:08.408 "traddr": "10.0.0.2", 00:15:08.408 "adrfam": "ipv4", 00:15:08.408 "trsvcid": "4420", 00:15:08.408 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:08.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.408 "prchk_reftag": false, 00:15:08.409 "prchk_guard": false, 00:15:08.409 "hdgst": false, 00:15:08.409 "ddgst": false, 00:15:08.409 "psk": "/tmp/tmp.O0n7R8f0NM" 00:15:08.409 } 00:15:08.409 } 00:15:08.409 Got JSON-RPC error response 00:15:08.409 GoRPCClient: error on JSON-RPC call 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84078 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84078 ']' 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84078 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84078 00:15:08.409 killing process with pid 84078 00:15:08.409 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.409 00:15:08.409 Latency(us) 00:15:08.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.409 =================================================================================================================== 00:15:08.409 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84078' 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84078 00:15:08.409 [2024-07-15 11:33:45.866986] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.409 11:33:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84078 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84124 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84124 /var/tmp/bdevperf.sock 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84124 ']' 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.668 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.668 [2024-07-15 11:33:46.123416] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:08.668 [2024-07-15 11:33:46.123593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84124 ] 00:15:08.926 [2024-07-15 11:33:46.276749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.927 [2024-07-15 11:33:46.347615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.190 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.190 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:09.190 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:09.447 [2024-07-15 11:33:46.671966] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:09.447 [2024-07-15 11:33:46.673859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1586240 (9): Bad file descriptor 00:15:09.447 [2024-07-15 11:33:46.674849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:09.447 [2024-07-15 11:33:46.674882] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:09.447 [2024-07-15 11:33:46.674902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:09.447 2024/07/15 11:33:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:09.447 request: 00:15:09.447 { 00:15:09.447 "method": "bdev_nvme_attach_controller", 00:15:09.447 "params": { 00:15:09.447 "name": "TLSTEST", 00:15:09.447 "trtype": "tcp", 00:15:09.447 "traddr": "10.0.0.2", 00:15:09.447 "adrfam": "ipv4", 00:15:09.447 "trsvcid": "4420", 00:15:09.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.447 "prchk_reftag": false, 00:15:09.447 "prchk_guard": false, 00:15:09.447 "hdgst": false, 00:15:09.447 "ddgst": false 00:15:09.447 } 00:15:09.447 } 00:15:09.447 Got JSON-RPC error response 00:15:09.447 GoRPCClient: error on JSON-RPC call 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84124 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84124 ']' 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84124 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84124 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # 
'[' reactor_2 = sudo ']' 00:15:09.447 killing process with pid 84124 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84124' 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84124 00:15:09.447 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.447 00:15:09.447 Latency(us) 00:15:09.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.447 =================================================================================================================== 00:15:09.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84124 00:15:09.447 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83485 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83485 ']' 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83485 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.448 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83485 00:15:09.704 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:09.704 killing process with pid 83485 00:15:09.704 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:09.704 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83485' 00:15:09.704 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83485 00:15:09.704 [2024-07-15 11:33:46.929382] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:09.704 11:33:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83485 00:15:09.704 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.6tWS54vE6Q 
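The key_long value produced above is the NVMe TLS PSK in interchange form: the NVMeTLSkey-1 prefix, a two-digit hash selector (the trailing 2 passed to format_interchange_psk shows up as the 02 field), and a base64 blob. The sketch below builds a string of that shape from the configured key; it assumes the blob is base64 of the key bytes followed by their CRC-32 appended little-endian, which matches the length of the value in the log but is not quoted from SPDK's sources, so treat it as illustrative and verify against nvmf/common.sh before relying on it:

    import base64
    import zlib

    def format_interchange_psk(key, digest):
        """Sketch of the interchange encoding seen in the log (assumed layout)."""
        raw = key.encode("ascii")
        crc = zlib.crc32(raw).to_bytes(4, "little")  # assumption: CRC-32, little-endian
        return "NVMeTLSkey-1:{:02}:{}:".format(digest, base64.b64encode(raw + crc).decode())

    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
    # Expected to start with NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1... as in the log.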
00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.6tWS54vE6Q 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84166 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:09.705 11:33:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84166 00:15:09.961 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84166 ']' 00:15:09.961 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.961 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.961 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.961 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.961 11:33:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.961 [2024-07-15 11:33:47.259912] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:09.961 [2024-07-15 11:33:47.260033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.961 [2024-07-15 11:33:47.409789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.218 [2024-07-15 11:33:47.480700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.218 [2024-07-15 11:33:47.480766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.218 [2024-07-15 11:33:47.480780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.218 [2024-07-15 11:33:47.480791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.218 [2024-07-15 11:33:47.480800] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.218 [2024-07-15 11:33:47.480841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.6tWS54vE6Q 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6tWS54vE6Q 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:11.150 [2024-07-15 11:33:48.583437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.150 11:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:11.407 11:33:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:11.974 [2024-07-15 11:33:49.143610] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:11.974 [2024-07-15 11:33:49.143859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.974 11:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:12.232 malloc0 00:15:12.232 11:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:12.490 11:33:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:12.748 [2024-07-15 11:33:50.070608] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:12.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
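The target-side setup just completed boils down to a handful of RPCs: create the TCP transport, create the subsystem, open a listener with -k (secure channel, i.e. TLS required), back it with a malloc bdev namespace, and register the host together with the PSK file. A condensed sketch of the same sequence driven from Python; the rpc.py invocations mirror the log, while the thin wrapper is illustrative and talks to the target's default /var/tmp/spdk.sock:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    def rpc(*args):
        # Thin wrapper; uses the target's default RPC socket.
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1", "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: TLS-only listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "/tmp/tmp.6tWS54vE6Q")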
00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tWS54vE6Q 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6tWS54vE6Q' 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84270 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84270 /var/tmp/bdevperf.sock 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84270 ']' 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.748 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.748 [2024-07-15 11:33:50.161481] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:12.749 [2024-07-15 11:33:50.161633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84270 ] 00:15:13.006 [2024-07-15 11:33:50.305891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.006 [2024-07-15 11:33:50.390337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.006 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.006 11:33:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:13.006 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:13.265 [2024-07-15 11:33:50.735888] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.265 [2024-07-15 11:33:50.736011] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:13.523 TLSTESTn1 00:15:13.523 11:33:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:13.523 Running I/O for 10 seconds... 
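Once TLSTESTn1 is attached over the TLS-protected connection, the verify workload itself is kicked off through bdevperf's RPC helper, as in the last entry above. A hedged sketch of that step; the script path and arguments come from the log, the timeout handling around it is illustrative:

    import subprocess

    BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"

    result = subprocess.run(
        [BDEVPERF_PY, "-t", "20", "-s", "/var/tmp/bdevperf.sock", "perform_tests"],
        capture_output=True, text=True, timeout=60,
    )
    if result.returncode != 0:
        raise RuntimeError("bdevperf run failed: " + result.stderr)
    print(result.stdout)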
00:15:25.723 00:15:25.723 Latency(us) 00:15:25.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.723 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:25.723 Verification LBA range: start 0x0 length 0x2000 00:15:25.723 TLSTESTn1 : 10.02 3756.70 14.67 0.00 0.00 34005.23 7477.06 28716.68 00:15:25.723 =================================================================================================================== 00:15:25.723 Total : 3756.70 14.67 0.00 0.00 34005.23 7477.06 28716.68 00:15:25.723 0 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84270 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84270 ']' 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84270 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.723 11:34:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84270 00:15:25.723 killing process with pid 84270 00:15:25.723 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.723 00:15:25.723 Latency(us) 00:15:25.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.724 =================================================================================================================== 00:15:25.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84270' 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84270 00:15:25.724 [2024-07-15 11:34:01.014243] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84270 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.6tWS54vE6Q 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tWS54vE6Q 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tWS54vE6Q 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6tWS54vE6Q 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:25.724 
11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6tWS54vE6Q' 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84404 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84404 /var/tmp/bdevperf.sock 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84404 ']' 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.724 11:34:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 [2024-07-15 11:34:01.261374] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:25.724 [2024-07-15 11:34:01.261490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84404 ] 00:15:25.724 [2024-07-15 11:34:01.404682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.724 [2024-07-15 11:34:01.475702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:25.724 [2024-07-15 11:34:02.587529] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.724 [2024-07-15 11:34:02.587662] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:25.724 [2024-07-15 11:34:02.587681] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.6tWS54vE6Q 00:15:25.724 2024/07/15 11:34:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.6tWS54vE6Q subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:25.724 request: 00:15:25.724 { 00:15:25.724 "method": "bdev_nvme_attach_controller", 00:15:25.724 "params": { 00:15:25.724 "name": "TLSTEST", 00:15:25.724 "trtype": "tcp", 00:15:25.724 "traddr": "10.0.0.2", 00:15:25.724 "adrfam": "ipv4", 00:15:25.724 "trsvcid": "4420", 00:15:25.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.724 "prchk_reftag": false, 00:15:25.724 "prchk_guard": false, 00:15:25.724 "hdgst": false, 00:15:25.724 "ddgst": false, 00:15:25.724 "psk": "/tmp/tmp.6tWS54vE6Q" 00:15:25.724 } 00:15:25.724 } 00:15:25.724 Got JSON-RPC error response 00:15:25.724 GoRPCClient: error on JSON-RPC call 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84404 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84404 ']' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84404 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84404 00:15:25.724 killing process with pid 84404 00:15:25.724 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.724 00:15:25.724 Latency(us) 00:15:25.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.724 =================================================================================================================== 00:15:25.724 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84404' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84404 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84404 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84166 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84166 ']' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84166 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84166 00:15:25.724 killing process with pid 84166 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84166' 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84166 00:15:25.724 [2024-07-15 11:34:02.817202] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84166 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.724 11:34:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84460 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84460 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84460 ']' 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.724 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 [2024-07-15 11:34:03.079124] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:25.724 [2024-07-15 11:34:03.079268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.982 [2024-07-15 11:34:03.214526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.982 [2024-07-15 11:34:03.271642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.982 [2024-07-15 11:34:03.271695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.982 [2024-07-15 11:34:03.271706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.982 [2024-07-15 11:34:03.271714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.982 [2024-07-15 11:34:03.271721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
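Each time an SPDK app is (re)started here, the harness blocks on "Waiting for process to start up and listen on UNIX domain socket ..." until the RPC socket accepts connections. A simplified version of that readiness check is sketched below; the real waitforlisten in autotest_common.sh also checks the PID and bounds the retries differently, so this only illustrates the socket-polling idea:

    import socket
    import time

    def wait_for_rpc_socket(path="/var/tmp/spdk.sock", retries=100, delay=0.5):
        """Poll a UNIX-domain RPC socket until the SPDK app starts listening."""
        for _ in range(retries):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True
            except OSError:
                time.sleep(delay)
            finally:
                s.close()
        return False

    if not wait_for_rpc_socket():
        raise RuntimeError("nvmf_tgt did not start listening on /var/tmp/spdk.sock")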
00:15:25.982 [2024-07-15 11:34:03.271751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.6tWS54vE6Q 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6tWS54vE6Q 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.6tWS54vE6Q 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6tWS54vE6Q 00:15:25.982 11:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:26.239 [2024-07-15 11:34:03.654170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.239 11:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:26.497 11:34:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:26.754 [2024-07-15 11:34:04.182264] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:26.754 [2024-07-15 11:34:04.182472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.754 11:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:27.012 malloc0 00:15:27.012 11:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:27.268 11:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:27.525 [2024-07-15 11:34:04.933030] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:27.525 [2024-07-15 11:34:04.933072] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:27.525 [2024-07-15 11:34:04.933106] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:27.525 2024/07/15 11:34:04 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.6tWS54vE6Q], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:27.525 request: 00:15:27.525 { 00:15:27.525 "method": "nvmf_subsystem_add_host", 00:15:27.525 "params": { 00:15:27.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.525 "host": "nqn.2016-06.io.spdk:host1", 00:15:27.525 "psk": "/tmp/tmp.6tWS54vE6Q" 00:15:27.525 } 00:15:27.525 } 00:15:27.525 Got JSON-RPC error response 00:15:27.525 GoRPCClient: error on JSON-RPC call 00:15:27.525 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84460 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84460 ']' 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84460 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84460 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:27.526 killing process with pid 84460 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84460' 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84460 00:15:27.526 11:34:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84460 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.6tWS54vE6Q 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84557 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84557 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84557 ']' 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
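The preceding passes establish that SPDK refuses a PSK file with lax permissions: after chmod 0666 both the initiator (bdev_nvme, "Could not load PSK") and the target (tcp_load_psk, "Incorrect permissions for PSK file") reject it, and the chmod 0600 above restores a usable key. A small check in the same spirit follows; the exact mask SPDK enforces is not quoted here, so testing against 0o077 is an assumption based on the observed 0600-accepted/0666-rejected behaviour:

    import os
    import stat

    def psk_file_is_private(path):
        """Return True if the PSK file has no group/other permission bits set."""
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & 0o077) == 0  # assumed policy; 0600 passes, 0666 fails

    # /tmp/tmp.6tWS54vE6Q with mode 0600 passes; with 0666 it would fail this check.
    print(psk_file_is_private("/tmp/tmp.6tWS54vE6Q"))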
00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.784 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.784 [2024-07-15 11:34:05.247512] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:27.784 [2024-07-15 11:34:05.247685] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.042 [2024-07-15 11:34:05.389016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.042 [2024-07-15 11:34:05.474437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.042 [2024-07-15 11:34:05.474515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.042 [2024-07-15 11:34:05.474533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.042 [2024-07-15 11:34:05.474568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.042 [2024-07-15 11:34:05.474583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.042 [2024-07-15 11:34:05.474621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.6tWS54vE6Q 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6tWS54vE6Q 00:15:28.300 11:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:28.559 [2024-07-15 11:34:05.869799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.559 11:34:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:28.817 11:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:29.076 [2024-07-15 11:34:06.369914] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.076 [2024-07-15 11:34:06.370137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.076 11:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:29.334 malloc0 00:15:29.334 11:34:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:29.900 11:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:30.168 [2024-07-15 11:34:07.380973] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84643 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84643 /var/tmp/bdevperf.sock 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84643 ']' 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.168 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.168 [2024-07-15 11:34:07.451850] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:30.168 [2024-07-15 11:34:07.451980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84643 ] 00:15:30.168 [2024-07-15 11:34:07.594769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.436 [2024-07-15 11:34:07.683839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.436 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.436 11:34:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:30.436 11:34:07 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:30.695 [2024-07-15 11:34:08.099373] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:30.695 [2024-07-15 11:34:08.099497] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:30.695 TLSTESTn1 00:15:30.953 11:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:31.212 11:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:31.212 "subsystems": [ 00:15:31.212 { 00:15:31.212 "subsystem": "keyring", 00:15:31.212 "config": [] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "iobuf", 00:15:31.212 "config": [ 00:15:31.212 { 00:15:31.212 "method": "iobuf_set_options", 00:15:31.212 "params": { 00:15:31.212 "large_bufsize": 
135168, 00:15:31.212 "large_pool_count": 1024, 00:15:31.212 "small_bufsize": 8192, 00:15:31.212 "small_pool_count": 8192 00:15:31.212 } 00:15:31.212 } 00:15:31.212 ] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "sock", 00:15:31.212 "config": [ 00:15:31.212 { 00:15:31.212 "method": "sock_set_default_impl", 00:15:31.212 "params": { 00:15:31.212 "impl_name": "posix" 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "sock_impl_set_options", 00:15:31.212 "params": { 00:15:31.212 "enable_ktls": false, 00:15:31.212 "enable_placement_id": 0, 00:15:31.212 "enable_quickack": false, 00:15:31.212 "enable_recv_pipe": true, 00:15:31.212 "enable_zerocopy_send_client": false, 00:15:31.212 "enable_zerocopy_send_server": true, 00:15:31.212 "impl_name": "ssl", 00:15:31.212 "recv_buf_size": 4096, 00:15:31.212 "send_buf_size": 4096, 00:15:31.212 "tls_version": 0, 00:15:31.212 "zerocopy_threshold": 0 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "sock_impl_set_options", 00:15:31.212 "params": { 00:15:31.212 "enable_ktls": false, 00:15:31.212 "enable_placement_id": 0, 00:15:31.212 "enable_quickack": false, 00:15:31.212 "enable_recv_pipe": true, 00:15:31.212 "enable_zerocopy_send_client": false, 00:15:31.212 "enable_zerocopy_send_server": true, 00:15:31.212 "impl_name": "posix", 00:15:31.212 "recv_buf_size": 2097152, 00:15:31.212 "send_buf_size": 2097152, 00:15:31.212 "tls_version": 0, 00:15:31.212 "zerocopy_threshold": 0 00:15:31.212 } 00:15:31.212 } 00:15:31.212 ] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "vmd", 00:15:31.212 "config": [] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "accel", 00:15:31.212 "config": [ 00:15:31.212 { 00:15:31.212 "method": "accel_set_options", 00:15:31.212 "params": { 00:15:31.212 "buf_count": 2048, 00:15:31.212 "large_cache_size": 16, 00:15:31.212 "sequence_count": 2048, 00:15:31.212 "small_cache_size": 128, 00:15:31.212 "task_count": 2048 00:15:31.212 } 00:15:31.212 } 00:15:31.212 ] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "bdev", 00:15:31.212 "config": [ 00:15:31.212 { 00:15:31.212 "method": "bdev_set_options", 00:15:31.212 "params": { 00:15:31.212 "bdev_auto_examine": true, 00:15:31.212 "bdev_io_cache_size": 256, 00:15:31.212 "bdev_io_pool_size": 65535, 00:15:31.212 "iobuf_large_cache_size": 16, 00:15:31.212 "iobuf_small_cache_size": 128 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "bdev_raid_set_options", 00:15:31.212 "params": { 00:15:31.212 "process_window_size_kb": 1024 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "bdev_iscsi_set_options", 00:15:31.212 "params": { 00:15:31.212 "timeout_sec": 30 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "bdev_nvme_set_options", 00:15:31.212 "params": { 00:15:31.212 "action_on_timeout": "none", 00:15:31.212 "allow_accel_sequence": false, 00:15:31.212 "arbitration_burst": 0, 00:15:31.212 "bdev_retry_count": 3, 00:15:31.212 "ctrlr_loss_timeout_sec": 0, 00:15:31.212 "delay_cmd_submit": true, 00:15:31.212 "dhchap_dhgroups": [ 00:15:31.212 "null", 00:15:31.212 "ffdhe2048", 00:15:31.212 "ffdhe3072", 00:15:31.212 "ffdhe4096", 00:15:31.212 "ffdhe6144", 00:15:31.212 "ffdhe8192" 00:15:31.212 ], 00:15:31.212 "dhchap_digests": [ 00:15:31.212 "sha256", 00:15:31.212 "sha384", 00:15:31.212 "sha512" 00:15:31.212 ], 00:15:31.212 "disable_auto_failback": false, 00:15:31.212 "fast_io_fail_timeout_sec": 0, 00:15:31.212 "generate_uuids": false, 00:15:31.212 "high_priority_weight": 0, 
00:15:31.212 "io_path_stat": false, 00:15:31.212 "io_queue_requests": 0, 00:15:31.212 "keep_alive_timeout_ms": 10000, 00:15:31.212 "low_priority_weight": 0, 00:15:31.212 "medium_priority_weight": 0, 00:15:31.212 "nvme_adminq_poll_period_us": 10000, 00:15:31.212 "nvme_error_stat": false, 00:15:31.212 "nvme_ioq_poll_period_us": 0, 00:15:31.212 "rdma_cm_event_timeout_ms": 0, 00:15:31.212 "rdma_max_cq_size": 0, 00:15:31.212 "rdma_srq_size": 0, 00:15:31.212 "reconnect_delay_sec": 0, 00:15:31.212 "timeout_admin_us": 0, 00:15:31.212 "timeout_us": 0, 00:15:31.212 "transport_ack_timeout": 0, 00:15:31.212 "transport_retry_count": 4, 00:15:31.212 "transport_tos": 0 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "bdev_nvme_set_hotplug", 00:15:31.212 "params": { 00:15:31.212 "enable": false, 00:15:31.212 "period_us": 100000 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "bdev_malloc_create", 00:15:31.212 "params": { 00:15:31.212 "block_size": 4096, 00:15:31.212 "name": "malloc0", 00:15:31.212 "num_blocks": 8192, 00:15:31.212 "optimal_io_boundary": 0, 00:15:31.212 "physical_block_size": 4096, 00:15:31.212 "uuid": "008108b5-df32-48e8-813c-a6dc3d1e1f1f" 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "bdev_wait_for_examine" 00:15:31.212 } 00:15:31.212 ] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "nbd", 00:15:31.212 "config": [] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "scheduler", 00:15:31.212 "config": [ 00:15:31.212 { 00:15:31.212 "method": "framework_set_scheduler", 00:15:31.212 "params": { 00:15:31.212 "name": "static" 00:15:31.212 } 00:15:31.212 } 00:15:31.212 ] 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "subsystem": "nvmf", 00:15:31.212 "config": [ 00:15:31.212 { 00:15:31.212 "method": "nvmf_set_config", 00:15:31.212 "params": { 00:15:31.212 "admin_cmd_passthru": { 00:15:31.212 "identify_ctrlr": false 00:15:31.212 }, 00:15:31.212 "discovery_filter": "match_any" 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "nvmf_set_max_subsystems", 00:15:31.212 "params": { 00:15:31.212 "max_subsystems": 1024 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "nvmf_set_crdt", 00:15:31.212 "params": { 00:15:31.212 "crdt1": 0, 00:15:31.212 "crdt2": 0, 00:15:31.212 "crdt3": 0 00:15:31.212 } 00:15:31.212 }, 00:15:31.212 { 00:15:31.212 "method": "nvmf_create_transport", 00:15:31.212 "params": { 00:15:31.212 "abort_timeout_sec": 1, 00:15:31.212 "ack_timeout": 0, 00:15:31.212 "buf_cache_size": 4294967295, 00:15:31.212 "c2h_success": false, 00:15:31.212 "data_wr_pool_size": 0, 00:15:31.212 "dif_insert_or_strip": false, 00:15:31.212 "in_capsule_data_size": 4096, 00:15:31.212 "io_unit_size": 131072, 00:15:31.212 "max_aq_depth": 128, 00:15:31.212 "max_io_qpairs_per_ctrlr": 127, 00:15:31.212 "max_io_size": 131072, 00:15:31.212 "max_queue_depth": 128, 00:15:31.213 "num_shared_buffers": 511, 00:15:31.213 "sock_priority": 0, 00:15:31.213 "trtype": "TCP", 00:15:31.213 "zcopy": false 00:15:31.213 } 00:15:31.213 }, 00:15:31.213 { 00:15:31.213 "method": "nvmf_create_subsystem", 00:15:31.213 "params": { 00:15:31.213 "allow_any_host": false, 00:15:31.213 "ana_reporting": false, 00:15:31.213 "max_cntlid": 65519, 00:15:31.213 "max_namespaces": 10, 00:15:31.213 "min_cntlid": 1, 00:15:31.213 "model_number": "SPDK bdev Controller", 00:15:31.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.213 "serial_number": "SPDK00000000000001" 00:15:31.213 } 00:15:31.213 }, 00:15:31.213 { 00:15:31.213 "method": 
"nvmf_subsystem_add_host", 00:15:31.213 "params": { 00:15:31.213 "host": "nqn.2016-06.io.spdk:host1", 00:15:31.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.213 "psk": "/tmp/tmp.6tWS54vE6Q" 00:15:31.213 } 00:15:31.213 }, 00:15:31.213 { 00:15:31.213 "method": "nvmf_subsystem_add_ns", 00:15:31.213 "params": { 00:15:31.213 "namespace": { 00:15:31.213 "bdev_name": "malloc0", 00:15:31.213 "nguid": "008108B5DF3248E8813CA6DC3D1E1F1F", 00:15:31.213 "no_auto_visible": false, 00:15:31.213 "nsid": 1, 00:15:31.213 "uuid": "008108b5-df32-48e8-813c-a6dc3d1e1f1f" 00:15:31.213 }, 00:15:31.213 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:31.213 } 00:15:31.213 }, 00:15:31.213 { 00:15:31.213 "method": "nvmf_subsystem_add_listener", 00:15:31.213 "params": { 00:15:31.213 "listen_address": { 00:15:31.213 "adrfam": "IPv4", 00:15:31.213 "traddr": "10.0.0.2", 00:15:31.213 "trsvcid": "4420", 00:15:31.213 "trtype": "TCP" 00:15:31.213 }, 00:15:31.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.213 "secure_channel": true 00:15:31.213 } 00:15:31.213 } 00:15:31.213 ] 00:15:31.213 } 00:15:31.213 ] 00:15:31.213 }' 00:15:31.213 11:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:31.471 11:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:31.471 "subsystems": [ 00:15:31.471 { 00:15:31.471 "subsystem": "keyring", 00:15:31.471 "config": [] 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "subsystem": "iobuf", 00:15:31.471 "config": [ 00:15:31.471 { 00:15:31.471 "method": "iobuf_set_options", 00:15:31.471 "params": { 00:15:31.471 "large_bufsize": 135168, 00:15:31.471 "large_pool_count": 1024, 00:15:31.471 "small_bufsize": 8192, 00:15:31.471 "small_pool_count": 8192 00:15:31.471 } 00:15:31.471 } 00:15:31.471 ] 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "subsystem": "sock", 00:15:31.471 "config": [ 00:15:31.471 { 00:15:31.471 "method": "sock_set_default_impl", 00:15:31.471 "params": { 00:15:31.471 "impl_name": "posix" 00:15:31.471 } 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "method": "sock_impl_set_options", 00:15:31.471 "params": { 00:15:31.471 "enable_ktls": false, 00:15:31.471 "enable_placement_id": 0, 00:15:31.471 "enable_quickack": false, 00:15:31.471 "enable_recv_pipe": true, 00:15:31.471 "enable_zerocopy_send_client": false, 00:15:31.471 "enable_zerocopy_send_server": true, 00:15:31.471 "impl_name": "ssl", 00:15:31.471 "recv_buf_size": 4096, 00:15:31.471 "send_buf_size": 4096, 00:15:31.471 "tls_version": 0, 00:15:31.471 "zerocopy_threshold": 0 00:15:31.471 } 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "method": "sock_impl_set_options", 00:15:31.471 "params": { 00:15:31.471 "enable_ktls": false, 00:15:31.471 "enable_placement_id": 0, 00:15:31.471 "enable_quickack": false, 00:15:31.471 "enable_recv_pipe": true, 00:15:31.471 "enable_zerocopy_send_client": false, 00:15:31.471 "enable_zerocopy_send_server": true, 00:15:31.471 "impl_name": "posix", 00:15:31.471 "recv_buf_size": 2097152, 00:15:31.471 "send_buf_size": 2097152, 00:15:31.471 "tls_version": 0, 00:15:31.471 "zerocopy_threshold": 0 00:15:31.471 } 00:15:31.471 } 00:15:31.471 ] 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "subsystem": "vmd", 00:15:31.471 "config": [] 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "subsystem": "accel", 00:15:31.471 "config": [ 00:15:31.471 { 00:15:31.471 "method": "accel_set_options", 00:15:31.471 "params": { 00:15:31.471 "buf_count": 2048, 00:15:31.471 "large_cache_size": 16, 00:15:31.471 "sequence_count": 2048, 00:15:31.471 
"small_cache_size": 128, 00:15:31.471 "task_count": 2048 00:15:31.471 } 00:15:31.471 } 00:15:31.471 ] 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "subsystem": "bdev", 00:15:31.471 "config": [ 00:15:31.471 { 00:15:31.471 "method": "bdev_set_options", 00:15:31.471 "params": { 00:15:31.471 "bdev_auto_examine": true, 00:15:31.471 "bdev_io_cache_size": 256, 00:15:31.471 "bdev_io_pool_size": 65535, 00:15:31.471 "iobuf_large_cache_size": 16, 00:15:31.471 "iobuf_small_cache_size": 128 00:15:31.471 } 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "method": "bdev_raid_set_options", 00:15:31.471 "params": { 00:15:31.471 "process_window_size_kb": 1024 00:15:31.471 } 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "method": "bdev_iscsi_set_options", 00:15:31.471 "params": { 00:15:31.471 "timeout_sec": 30 00:15:31.471 } 00:15:31.471 }, 00:15:31.471 { 00:15:31.471 "method": "bdev_nvme_set_options", 00:15:31.471 "params": { 00:15:31.471 "action_on_timeout": "none", 00:15:31.471 "allow_accel_sequence": false, 00:15:31.471 "arbitration_burst": 0, 00:15:31.471 "bdev_retry_count": 3, 00:15:31.471 "ctrlr_loss_timeout_sec": 0, 00:15:31.471 "delay_cmd_submit": true, 00:15:31.472 "dhchap_dhgroups": [ 00:15:31.472 "null", 00:15:31.472 "ffdhe2048", 00:15:31.472 "ffdhe3072", 00:15:31.472 "ffdhe4096", 00:15:31.472 "ffdhe6144", 00:15:31.472 "ffdhe8192" 00:15:31.472 ], 00:15:31.472 "dhchap_digests": [ 00:15:31.472 "sha256", 00:15:31.472 "sha384", 00:15:31.472 "sha512" 00:15:31.472 ], 00:15:31.472 "disable_auto_failback": false, 00:15:31.472 "fast_io_fail_timeout_sec": 0, 00:15:31.472 "generate_uuids": false, 00:15:31.472 "high_priority_weight": 0, 00:15:31.472 "io_path_stat": false, 00:15:31.472 "io_queue_requests": 512, 00:15:31.472 "keep_alive_timeout_ms": 10000, 00:15:31.472 "low_priority_weight": 0, 00:15:31.472 "medium_priority_weight": 0, 00:15:31.472 "nvme_adminq_poll_period_us": 10000, 00:15:31.472 "nvme_error_stat": false, 00:15:31.472 "nvme_ioq_poll_period_us": 0, 00:15:31.472 "rdma_cm_event_timeout_ms": 0, 00:15:31.472 "rdma_max_cq_size": 0, 00:15:31.472 "rdma_srq_size": 0, 00:15:31.472 "reconnect_delay_sec": 0, 00:15:31.472 "timeout_admin_us": 0, 00:15:31.472 "timeout_us": 0, 00:15:31.472 "transport_ack_timeout": 0, 00:15:31.472 "transport_retry_count": 4, 00:15:31.472 "transport_tos": 0 00:15:31.472 } 00:15:31.472 }, 00:15:31.472 { 00:15:31.472 "method": "bdev_nvme_attach_controller", 00:15:31.472 "params": { 00:15:31.472 "adrfam": "IPv4", 00:15:31.472 "ctrlr_loss_timeout_sec": 0, 00:15:31.472 "ddgst": false, 00:15:31.472 "fast_io_fail_timeout_sec": 0, 00:15:31.472 "hdgst": false, 00:15:31.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.472 "name": "TLSTEST", 00:15:31.472 "prchk_guard": false, 00:15:31.472 "prchk_reftag": false, 00:15:31.472 "psk": "/tmp/tmp.6tWS54vE6Q", 00:15:31.472 "reconnect_delay_sec": 0, 00:15:31.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.472 "traddr": "10.0.0.2", 00:15:31.472 "trsvcid": "4420", 00:15:31.472 "trtype": "TCP" 00:15:31.472 } 00:15:31.472 }, 00:15:31.472 { 00:15:31.472 "method": "bdev_nvme_set_hotplug", 00:15:31.472 "params": { 00:15:31.472 "enable": false, 00:15:31.472 "period_us": 100000 00:15:31.472 } 00:15:31.472 }, 00:15:31.472 { 00:15:31.472 "method": "bdev_wait_for_examine" 00:15:31.472 } 00:15:31.472 ] 00:15:31.472 }, 00:15:31.472 { 00:15:31.472 "subsystem": "nbd", 00:15:31.472 "config": [] 00:15:31.472 } 00:15:31.472 ] 00:15:31.472 }' 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84643 00:15:31.472 11:34:08 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84643 ']' 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84643 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84643 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:31.472 killing process with pid 84643 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84643' 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84643 00:15:31.472 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.472 00:15:31.472 Latency(us) 00:15:31.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.472 =================================================================================================================== 00:15:31.472 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.472 [2024-07-15 11:34:08.941359] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:31.472 11:34:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84643 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84557 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84557 ']' 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84557 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84557 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:31.731 killing process with pid 84557 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84557' 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84557 00:15:31.731 [2024-07-15 11:34:09.129900] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:31.731 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84557 00:15:31.990 11:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:31.990 11:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:31.990 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:31.990 11:34:09 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:31.990 "subsystems": [ 00:15:31.990 { 00:15:31.990 "subsystem": "keyring", 00:15:31.990 "config": [] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "iobuf", 00:15:31.990 "config": [ 00:15:31.990 { 00:15:31.990 "method": "iobuf_set_options", 00:15:31.990 "params": { 00:15:31.990 "large_bufsize": 135168, 
00:15:31.990 "large_pool_count": 1024, 00:15:31.990 "small_bufsize": 8192, 00:15:31.990 "small_pool_count": 8192 00:15:31.990 } 00:15:31.990 } 00:15:31.990 ] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "sock", 00:15:31.990 "config": [ 00:15:31.990 { 00:15:31.990 "method": "sock_set_default_impl", 00:15:31.990 "params": { 00:15:31.990 "impl_name": "posix" 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "sock_impl_set_options", 00:15:31.990 "params": { 00:15:31.990 "enable_ktls": false, 00:15:31.990 "enable_placement_id": 0, 00:15:31.990 "enable_quickack": false, 00:15:31.990 "enable_recv_pipe": true, 00:15:31.990 "enable_zerocopy_send_client": false, 00:15:31.990 "enable_zerocopy_send_server": true, 00:15:31.990 "impl_name": "ssl", 00:15:31.990 "recv_buf_size": 4096, 00:15:31.990 "send_buf_size": 4096, 00:15:31.990 "tls_version": 0, 00:15:31.990 "zerocopy_threshold": 0 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "sock_impl_set_options", 00:15:31.990 "params": { 00:15:31.990 "enable_ktls": false, 00:15:31.990 "enable_placement_id": 0, 00:15:31.990 "enable_quickack": false, 00:15:31.990 "enable_recv_pipe": true, 00:15:31.990 "enable_zerocopy_send_client": false, 00:15:31.990 "enable_zerocopy_send_server": true, 00:15:31.990 "impl_name": "posix", 00:15:31.990 "recv_buf_size": 2097152, 00:15:31.990 "send_buf_size": 2097152, 00:15:31.990 "tls_version": 0, 00:15:31.990 "zerocopy_threshold": 0 00:15:31.990 } 00:15:31.990 } 00:15:31.990 ] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "vmd", 00:15:31.990 "config": [] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "accel", 00:15:31.990 "config": [ 00:15:31.990 { 00:15:31.990 "method": "accel_set_options", 00:15:31.990 "params": { 00:15:31.990 "buf_count": 2048, 00:15:31.990 "large_cache_size": 16, 00:15:31.990 "sequence_count": 2048, 00:15:31.990 "small_cache_size": 128, 00:15:31.990 "task_count": 2048 00:15:31.990 } 00:15:31.990 } 00:15:31.990 ] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "bdev", 00:15:31.990 "config": [ 00:15:31.990 { 00:15:31.990 "method": "bdev_set_options", 00:15:31.990 "params": { 00:15:31.990 "bdev_auto_examine": true, 00:15:31.990 "bdev_io_cache_size": 256, 00:15:31.990 "bdev_io_pool_size": 65535, 00:15:31.990 "iobuf_large_cache_size": 16, 00:15:31.990 "iobuf_small_cache_size": 128 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "bdev_raid_set_options", 00:15:31.990 "params": { 00:15:31.990 "process_window_size_kb": 1024 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "bdev_iscsi_set_options", 00:15:31.990 "params": { 00:15:31.990 "timeout_sec": 30 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "bdev_nvme_set_options", 00:15:31.990 "params": { 00:15:31.990 "action_on_timeout": "none", 00:15:31.990 "allow_accel_sequence": false, 00:15:31.990 "arbitration_burst": 0, 00:15:31.990 "bdev_retry_count": 3, 00:15:31.990 "ctrlr_loss_timeout_sec": 0, 00:15:31.990 "delay_cmd_submit": true, 00:15:31.990 "dhchap_dhgroups": [ 00:15:31.990 "null", 00:15:31.990 "ffdhe2048", 00:15:31.990 "ffdhe3072", 00:15:31.990 "ffdhe4096", 00:15:31.990 "ffdhe6144", 00:15:31.990 "ffdhe8192" 00:15:31.990 ], 00:15:31.990 "dhchap_digests": [ 00:15:31.990 "sha256", 00:15:31.990 "sha384", 00:15:31.990 "sha512" 00:15:31.990 ], 00:15:31.990 "disable_auto_failback": false, 00:15:31.990 "fast_io_fail_timeout_sec": 0, 00:15:31.990 "generate_uuids": false, 00:15:31.990 "high_priority_weight": 0, 00:15:31.990 
"io_path_stat": false, 00:15:31.990 "io_queue_requests": 0, 00:15:31.990 "keep_alive_timeout_ms": 10000, 00:15:31.990 "low_priority_weight": 0, 00:15:31.990 "medium_priority_weight": 0, 00:15:31.990 "nvme_adminq_poll_period_us": 10000, 00:15:31.990 "nvme_error_stat": false, 00:15:31.990 "nvme_ioq_poll_period_us": 0, 00:15:31.990 "rdma_cm_event_timeout_ms": 0, 00:15:31.990 "rdma_max_cq_size": 0, 00:15:31.990 "rdma_srq_size": 0, 00:15:31.990 "reconnect_delay_sec": 0, 00:15:31.990 "timeout_admin_us": 0, 00:15:31.990 "timeout_us": 0, 00:15:31.990 "transport_ack_timeout": 0, 00:15:31.990 "transport_retry_count": 4, 00:15:31.990 "transport_tos": 0 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "bdev_nvme_set_hotplug", 00:15:31.990 "params": { 00:15:31.990 "enable": false, 00:15:31.990 "period_us": 100000 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "bdev_malloc_create", 00:15:31.990 "params": { 00:15:31.990 "block_size": 4096, 00:15:31.990 "name": "malloc0", 00:15:31.990 "num_blocks": 8192, 00:15:31.990 "optimal_io_boundary": 0, 00:15:31.990 "physical_block_size": 4096, 00:15:31.990 "uuid": "008108b5-df32-48e8-813c-a6dc3d1e1f1f" 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "bdev_wait_for_examine" 00:15:31.990 } 00:15:31.990 ] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "nbd", 00:15:31.990 "config": [] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "scheduler", 00:15:31.990 "config": [ 00:15:31.990 { 00:15:31.990 "method": "framework_set_scheduler", 00:15:31.990 "params": { 00:15:31.990 "name": "static" 00:15:31.990 } 00:15:31.990 } 00:15:31.990 ] 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "subsystem": "nvmf", 00:15:31.990 "config": [ 00:15:31.990 { 00:15:31.990 "method": "nvmf_set_config", 00:15:31.990 "params": { 00:15:31.990 "admin_cmd_passthru": { 00:15:31.990 "identify_ctrlr": false 00:15:31.990 }, 00:15:31.990 "discovery_filter": "match_any" 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "nvmf_set_max_subsystems", 00:15:31.990 "params": { 00:15:31.990 "max_subsystems": 1024 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "nvmf_set_crdt", 00:15:31.990 "params": { 00:15:31.990 "crdt1": 0, 00:15:31.990 "crdt2": 0, 00:15:31.990 "crdt3": 0 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "nvmf_create_transport", 00:15:31.990 "params": { 00:15:31.990 "abort_timeout_sec": 1, 00:15:31.990 "ack_timeout": 0, 00:15:31.990 "buf_cache_size": 4294967295, 00:15:31.990 "c2h_success": false, 00:15:31.990 "data_wr_pool_size": 0, 00:15:31.990 "dif_insert_or_strip": false, 00:15:31.990 "in_capsule_data_size": 4096, 00:15:31.990 "io_unit_size": 131072, 00:15:31.990 "max_aq_depth": 128, 00:15:31.990 "max_io_qpairs_per_ctrlr": 127, 00:15:31.990 "max_io_size": 131072, 00:15:31.990 "max_queue_depth": 128, 00:15:31.990 "num_shared_buffers": 511, 00:15:31.990 "sock_priority": 0, 00:15:31.990 "trtype": "TCP", 00:15:31.990 "zcopy": false 00:15:31.990 } 00:15:31.990 }, 00:15:31.990 { 00:15:31.990 "method": "nvmf_create_subsystem", 00:15:31.991 "params": { 00:15:31.991 "allow_any_host": false, 00:15:31.991 "ana_reporting": false, 00:15:31.991 "max_cntlid": 65519, 00:15:31.991 "max_namespaces": 10, 00:15:31.991 "min_cntlid": 1, 00:15:31.991 "model_number": "SPDK bdev Controller", 00:15:31.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.991 "serial_number": "SPDK00000000000001" 00:15:31.991 } 00:15:31.991 }, 00:15:31.991 { 00:15:31.991 "method": 
"nvmf_subsystem_add_host", 00:15:31.991 "params": { 00:15:31.991 "host": "nqn.2016-06.io.spdk:host1", 00:15:31.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.991 "psk": "/tmp/tmp.6tWS54vE6Q" 00:15:31.991 } 00:15:31.991 }, 00:15:31.991 { 00:15:31.991 "method": "nvmf_subsystem_add_ns", 00:15:31.991 "params": { 00:15:31.991 "namespace": { 00:15:31.991 "bdev_name": "malloc0", 00:15:31.991 "nguid": "008108B5DF3248E8813CA6DC3D1E1F1F", 00:15:31.991 "no_auto_visible": false, 00:15:31.991 "nsid": 1, 00:15:31.991 "uuid": "008108b5-df32-48e8-813c-a6dc3d1e1f1f" 00:15:31.991 }, 00:15:31.991 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:31.991 } 00:15:31.991 }, 00:15:31.991 { 00:15:31.991 "method": "nvmf_subsystem_add_listener", 00:15:31.991 "params": { 00:15:31.991 "listen_address": { 00:15:31.991 "adrfam": "IPv4", 00:15:31.991 "traddr": "10.0.0.2", 00:15:31.991 "trsvcid": "4420", 00:15:31.991 "trtype": "TCP" 00:15:31.991 }, 00:15:31.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.991 "secure_channel": true 00:15:31.991 } 00:15:31.991 } 00:15:31.991 ] 00:15:31.991 } 00:15:31.991 ] 00:15:31.991 }' 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84708 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84708 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84708 ']' 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.991 11:34:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.991 [2024-07-15 11:34:09.374061] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:31.991 [2024-07-15 11:34:09.374161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.249 [2024-07-15 11:34:09.509805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.249 [2024-07-15 11:34:09.568992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.249 [2024-07-15 11:34:09.569054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.249 [2024-07-15 11:34:09.569067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.249 [2024-07-15 11:34:09.569076] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.249 [2024-07-15 11:34:09.569083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.249 [2024-07-15 11:34:09.569183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.531 [2024-07-15 11:34:09.754514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.531 [2024-07-15 11:34:09.770442] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:32.531 [2024-07-15 11:34:09.786433] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.531 [2024-07-15 11:34:09.786682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.096 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84752 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84752 /var/tmp/bdevperf.sock 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84752 ']' 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:33.097 "subsystems": [ 00:15:33.097 { 00:15:33.097 "subsystem": "keyring", 00:15:33.097 "config": [] 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "subsystem": "iobuf", 00:15:33.097 "config": [ 00:15:33.097 { 00:15:33.097 "method": "iobuf_set_options", 00:15:33.097 "params": { 00:15:33.097 "large_bufsize": 135168, 00:15:33.097 "large_pool_count": 1024, 00:15:33.097 "small_bufsize": 8192, 00:15:33.097 "small_pool_count": 8192 00:15:33.097 } 00:15:33.097 } 00:15:33.097 ] 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "subsystem": "sock", 00:15:33.097 "config": [ 00:15:33.097 { 00:15:33.097 "method": "sock_set_default_impl", 00:15:33.097 "params": { 00:15:33.097 "impl_name": "posix" 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "sock_impl_set_options", 00:15:33.097 "params": { 00:15:33.097 "enable_ktls": false, 00:15:33.097 "enable_placement_id": 0, 00:15:33.097 "enable_quickack": false, 00:15:33.097 "enable_recv_pipe": true, 00:15:33.097 "enable_zerocopy_send_client": false, 00:15:33.097 "enable_zerocopy_send_server": true, 00:15:33.097 "impl_name": "ssl", 00:15:33.097 "recv_buf_size": 4096, 00:15:33.097 "send_buf_size": 4096, 00:15:33.097 "tls_version": 0, 00:15:33.097 "zerocopy_threshold": 0 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "sock_impl_set_options", 00:15:33.097 "params": { 00:15:33.097 "enable_ktls": false, 00:15:33.097 "enable_placement_id": 0, 00:15:33.097 "enable_quickack": false, 00:15:33.097 "enable_recv_pipe": true, 00:15:33.097 "enable_zerocopy_send_client": false, 00:15:33.097 "enable_zerocopy_send_server": true, 00:15:33.097 "impl_name": "posix", 00:15:33.097 "recv_buf_size": 2097152, 00:15:33.097 "send_buf_size": 2097152, 00:15:33.097 "tls_version": 0, 00:15:33.097 "zerocopy_threshold": 0 00:15:33.097 } 00:15:33.097 } 00:15:33.097 ] 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "subsystem": "vmd", 00:15:33.097 "config": [] 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "subsystem": "accel", 00:15:33.097 "config": [ 00:15:33.097 { 00:15:33.097 "method": "accel_set_options", 00:15:33.097 "params": { 00:15:33.097 "buf_count": 2048, 00:15:33.097 "large_cache_size": 16, 00:15:33.097 "sequence_count": 2048, 00:15:33.097 "small_cache_size": 128, 00:15:33.097 "task_count": 2048 00:15:33.097 } 00:15:33.097 } 00:15:33.097 ] 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "subsystem": "bdev", 00:15:33.097 "config": [ 00:15:33.097 { 00:15:33.097 "method": "bdev_set_options", 00:15:33.097 "params": { 00:15:33.097 "bdev_auto_examine": true, 00:15:33.097 "bdev_io_cache_size": 256, 00:15:33.097 "bdev_io_pool_size": 65535, 00:15:33.097 "iobuf_large_cache_size": 16, 00:15:33.097 "iobuf_small_cache_size": 128 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "bdev_raid_set_options", 00:15:33.097 "params": { 00:15:33.097 "process_window_size_kb": 1024 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "bdev_iscsi_set_options", 00:15:33.097 "params": { 00:15:33.097 "timeout_sec": 30 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "bdev_nvme_set_options", 00:15:33.097 "params": { 00:15:33.097 "action_on_timeout": "none", 00:15:33.097 "allow_accel_sequence": false, 00:15:33.097 "arbitration_burst": 0, 00:15:33.097 "bdev_retry_count": 3, 00:15:33.097 "ctrlr_loss_timeout_sec": 0, 00:15:33.097 "delay_cmd_submit": true, 00:15:33.097 
"dhchap_dhgroups": [ 00:15:33.097 "null", 00:15:33.097 "ffdhe2048", 00:15:33.097 "ffdhe3072", 00:15:33.097 "ffdhe4096", 00:15:33.097 "ffdhe6144", 00:15:33.097 "ffdhe8192" 00:15:33.097 ], 00:15:33.097 "dhchap_digests": [ 00:15:33.097 "sha256", 00:15:33.097 "sha384", 00:15:33.097 "sha512" 00:15:33.097 ], 00:15:33.097 "disable_auto_failback": false, 00:15:33.097 "fast_io_fail_timeout_sec": 0, 00:15:33.097 "generate_uuids": false, 00:15:33.097 "high_priority_weight": 0, 00:15:33.097 "io_path_stat": false, 00:15:33.097 "io_queue_requests": 512, 00:15:33.097 "keep_alive_timeout_ms": 10000, 00:15:33.097 "low_priority_weight": 0, 00:15:33.097 "medium_priority_weight": 0, 00:15:33.097 "nvme_adminq_poll_period_us": 10000, 00:15:33.097 "nvme_error_stat": false, 00:15:33.097 "nvme_ioq_poll_period_us": 0, 00:15:33.097 "rdma_cm_event_timeout_ms": 0, 00:15:33.097 "rdma_max_cq_size": 0, 00:15:33.097 "rdma_srq_size": 0, 00:15:33.097 "reconnect_delay_sec": 0, 00:15:33.097 "timeout_admin_us": 0, 00:15:33.097 "timeout_us": 0, 00:15:33.097 "transport_ack_timeout": 0, 00:15:33.097 "transport_retry_count": 4, 00:15:33.097 "transport_tos": 0 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "bdev_nvme_attach_controller", 00:15:33.097 "params": { 00:15:33.097 "adrfam": "IPv4", 00:15:33.097 "ctrlr_loss_timeout_sec": 0, 00:15:33.097 "ddgst": false, 00:15:33.097 "fast_io_fail_timeout_sec": 0, 00:15:33.097 "hdgst": false, 00:15:33.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.097 "name": "TLSTEST", 00:15:33.097 "prchk_guard": false, 00:15:33.097 "prchk_reftag": false, 00:15:33.097 "psk": "/tmp/tmp.6tWS54vE6Q", 00:15:33.097 "reconnect_delay_sec": 0, 00:15:33.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.097 "traddr": "10.0.0.2", 00:15:33.097 "trsvcid": "4420", 00:15:33.097 "trtype": "TCP" 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "bdev_nvme_set_hotplug", 00:15:33.097 "params": { 00:15:33.097 "enable": false, 00:15:33.097 "period_us": 100000 00:15:33.097 } 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "method": "bdev_wait_for_examine" 00:15:33.097 } 00:15:33.097 ] 00:15:33.097 }, 00:15:33.097 { 00:15:33.097 "subsystem": "nbd", 00:15:33.097 "config": [] 00:15:33.097 } 00:15:33.097 ] 00:15:33.097 }' 00:15:33.097 11:34:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.097 [2024-07-15 11:34:10.448702] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:33.097 [2024-07-15 11:34:10.448834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84752 ] 00:15:33.355 [2024-07-15 11:34:10.584099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.355 [2024-07-15 11:34:10.644045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.355 [2024-07-15 11:34:10.768672] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.356 [2024-07-15 11:34:10.768831] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:34.289 11:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.289 11:34:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:34.289 11:34:11 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:34.289 Running I/O for 10 seconds... 00:15:46.531 00:15:46.531 Latency(us) 00:15:46.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.531 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:46.531 Verification LBA range: start 0x0 length 0x2000 00:15:46.531 TLSTESTn1 : 10.03 3589.63 14.02 0.00 0.00 35580.97 7864.32 33363.78 00:15:46.531 =================================================================================================================== 00:15:46.531 Total : 3589.63 14.02 0.00 0.00 35580.97 7864.32 33363.78 00:15:46.531 0 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84752 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84752 ']' 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84752 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84752 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:46.531 killing process with pid 84752 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84752' 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84752 00:15:46.531 Received shutdown signal, test time was about 10.000000 seconds 00:15:46.531 00:15:46.531 Latency(us) 00:15:46.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.531 =================================================================================================================== 00:15:46.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.531 [2024-07-15 11:34:21.782582] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:46.531 11:34:21 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84752 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84708 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84708 ']' 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84708 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84708 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84708' 00:15:46.531 killing process with pid 84708 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84708 00:15:46.531 [2024-07-15 11:34:21.979119] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:46.531 11:34:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84708 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84904 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84904 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84904 ']' 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.531 11:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 [2024-07-15 11:34:22.234649] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:46.531 [2024-07-15 11:34:22.234785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.531 [2024-07-15 11:34:22.379363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.531 [2024-07-15 11:34:22.466370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:46.531 [2024-07-15 11:34:22.466447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.531 [2024-07-15 11:34:22.466464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.531 [2024-07-15 11:34:22.466476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.531 [2024-07-15 11:34:22.466488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.531 [2024-07-15 11:34:22.466523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.6tWS54vE6Q 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.6tWS54vE6Q 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:46.531 [2024-07-15 11:34:23.541428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:46.531 11:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:46.789 [2024-07-15 11:34:24.049497] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:46.789 [2024-07-15 11:34:24.049753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.789 11:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:47.047 malloc0 00:15:47.047 11:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:47.612 11:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6tWS54vE6Q 00:15:47.612 [2024-07-15 11:34:25.070148] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85008 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85008 /var/tmp/bdevperf.sock 00:15:47.870 
11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85008 ']' 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.870 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.870 [2024-07-15 11:34:25.138532] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:47.870 [2024-07-15 11:34:25.138651] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:15:47.870 [2024-07-15 11:34:25.273514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.870 [2024-07-15 11:34:25.333339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.128 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.128 11:34:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:48.128 11:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6tWS54vE6Q 00:15:48.694 11:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:48.952 [2024-07-15 11:34:26.245746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:48.952 nvme0n1 00:15:48.952 11:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:49.210 Running I/O for 1 seconds... 
00:15:50.145 00:15:50.145 Latency(us) 00:15:50.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.145 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:50.145 Verification LBA range: start 0x0 length 0x2000 00:15:50.145 nvme0n1 : 1.02 3519.15 13.75 0.00 0.00 36011.94 7328.12 28359.21 00:15:50.145 =================================================================================================================== 00:15:50.145 Total : 3519.15 13.75 0.00 0.00 36011.94 7328.12 28359.21 00:15:50.145 0 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85008 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85008 ']' 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85008 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85008 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:50.145 killing process with pid 85008 00:15:50.145 Received shutdown signal, test time was about 1.000000 seconds 00:15:50.145 00:15:50.145 Latency(us) 00:15:50.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.145 =================================================================================================================== 00:15:50.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85008' 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85008 00:15:50.145 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85008 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84904 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84904 ']' 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84904 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84904 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.403 killing process with pid 84904 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84904' 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84904 00:15:50.403 [2024-07-15 11:34:27.755810] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:50.403 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84904 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85071 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85071 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85071 ']' 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.660 11:34:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.660 [2024-07-15 11:34:27.999530] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:50.660 [2024-07-15 11:34:27.999679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.917 [2024-07-15 11:34:28.139378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.917 [2024-07-15 11:34:28.225093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.918 [2024-07-15 11:34:28.225172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.918 [2024-07-15 11:34:28.225192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.918 [2024-07-15 11:34:28.225205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.918 [2024-07-15 11:34:28.225217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:50.918 [2024-07-15 11:34:28.225261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.579 11:34:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.579 [2024-07-15 11:34:29.007181] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.579 malloc0 00:15:51.579 [2024-07-15 11:34:29.034212] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:51.579 [2024-07-15 11:34:29.034410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85122 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85122 /var/tmp/bdevperf.sock 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85122 ']' 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.838 11:34:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.838 [2024-07-15 11:34:29.112529] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:51.838 [2024-07-15 11:34:29.112640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85122 ] 00:15:51.838 [2024-07-15 11:34:29.244879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.838 [2024-07-15 11:34:29.311147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.769 11:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.770 11:34:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:52.770 11:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6tWS54vE6Q 00:15:53.026 11:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:53.283 [2024-07-15 11:34:30.669797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:53.283 nvme0n1 00:15:53.540 11:34:30 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:53.540 Running I/O for 1 seconds... 00:15:54.471 00:15:54.471 Latency(us) 00:15:54.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.471 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:54.471 Verification LBA range: start 0x0 length 0x2000 00:15:54.471 nvme0n1 : 1.02 3737.68 14.60 0.00 0.00 33938.53 7357.91 42181.35 00:15:54.471 =================================================================================================================== 00:15:54.471 Total : 3737.68 14.60 0.00 0.00 33938.53 7357.91 42181.35 00:15:54.471 0 00:15:54.729 11:34:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:54.729 11:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.729 11:34:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.729 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.729 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:15:54.729 "subsystems": [ 00:15:54.729 { 00:15:54.729 "subsystem": "keyring", 00:15:54.729 "config": [ 00:15:54.729 { 00:15:54.729 "method": "keyring_file_add_key", 00:15:54.729 "params": { 00:15:54.729 "name": "key0", 00:15:54.729 "path": "/tmp/tmp.6tWS54vE6Q" 00:15:54.729 } 00:15:54.729 } 00:15:54.729 ] 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "subsystem": "iobuf", 00:15:54.729 "config": [ 00:15:54.729 { 00:15:54.729 "method": "iobuf_set_options", 00:15:54.729 "params": { 00:15:54.729 "large_bufsize": 135168, 00:15:54.729 "large_pool_count": 1024, 00:15:54.729 "small_bufsize": 8192, 00:15:54.729 "small_pool_count": 8192 00:15:54.729 } 00:15:54.729 } 00:15:54.729 ] 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "subsystem": "sock", 00:15:54.729 "config": [ 00:15:54.729 { 00:15:54.729 "method": "sock_set_default_impl", 00:15:54.729 "params": { 00:15:54.729 "impl_name": "posix" 00:15:54.729 } 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "method": "sock_impl_set_options", 00:15:54.729 "params": { 00:15:54.729 
"enable_ktls": false, 00:15:54.729 "enable_placement_id": 0, 00:15:54.729 "enable_quickack": false, 00:15:54.729 "enable_recv_pipe": true, 00:15:54.729 "enable_zerocopy_send_client": false, 00:15:54.729 "enable_zerocopy_send_server": true, 00:15:54.729 "impl_name": "ssl", 00:15:54.729 "recv_buf_size": 4096, 00:15:54.729 "send_buf_size": 4096, 00:15:54.729 "tls_version": 0, 00:15:54.729 "zerocopy_threshold": 0 00:15:54.729 } 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "method": "sock_impl_set_options", 00:15:54.729 "params": { 00:15:54.729 "enable_ktls": false, 00:15:54.729 "enable_placement_id": 0, 00:15:54.729 "enable_quickack": false, 00:15:54.729 "enable_recv_pipe": true, 00:15:54.729 "enable_zerocopy_send_client": false, 00:15:54.729 "enable_zerocopy_send_server": true, 00:15:54.729 "impl_name": "posix", 00:15:54.729 "recv_buf_size": 2097152, 00:15:54.729 "send_buf_size": 2097152, 00:15:54.729 "tls_version": 0, 00:15:54.729 "zerocopy_threshold": 0 00:15:54.729 } 00:15:54.729 } 00:15:54.729 ] 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "subsystem": "vmd", 00:15:54.729 "config": [] 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "subsystem": "accel", 00:15:54.729 "config": [ 00:15:54.729 { 00:15:54.729 "method": "accel_set_options", 00:15:54.729 "params": { 00:15:54.729 "buf_count": 2048, 00:15:54.729 "large_cache_size": 16, 00:15:54.729 "sequence_count": 2048, 00:15:54.729 "small_cache_size": 128, 00:15:54.729 "task_count": 2048 00:15:54.729 } 00:15:54.729 } 00:15:54.729 ] 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "subsystem": "bdev", 00:15:54.729 "config": [ 00:15:54.729 { 00:15:54.729 "method": "bdev_set_options", 00:15:54.729 "params": { 00:15:54.729 "bdev_auto_examine": true, 00:15:54.729 "bdev_io_cache_size": 256, 00:15:54.729 "bdev_io_pool_size": 65535, 00:15:54.729 "iobuf_large_cache_size": 16, 00:15:54.729 "iobuf_small_cache_size": 128 00:15:54.729 } 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "method": "bdev_raid_set_options", 00:15:54.729 "params": { 00:15:54.729 "process_window_size_kb": 1024 00:15:54.729 } 00:15:54.729 }, 00:15:54.729 { 00:15:54.729 "method": "bdev_iscsi_set_options", 00:15:54.729 "params": { 00:15:54.730 "timeout_sec": 30 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "bdev_nvme_set_options", 00:15:54.730 "params": { 00:15:54.730 "action_on_timeout": "none", 00:15:54.730 "allow_accel_sequence": false, 00:15:54.730 "arbitration_burst": 0, 00:15:54.730 "bdev_retry_count": 3, 00:15:54.730 "ctrlr_loss_timeout_sec": 0, 00:15:54.730 "delay_cmd_submit": true, 00:15:54.730 "dhchap_dhgroups": [ 00:15:54.730 "null", 00:15:54.730 "ffdhe2048", 00:15:54.730 "ffdhe3072", 00:15:54.730 "ffdhe4096", 00:15:54.730 "ffdhe6144", 00:15:54.730 "ffdhe8192" 00:15:54.730 ], 00:15:54.730 "dhchap_digests": [ 00:15:54.730 "sha256", 00:15:54.730 "sha384", 00:15:54.730 "sha512" 00:15:54.730 ], 00:15:54.730 "disable_auto_failback": false, 00:15:54.730 "fast_io_fail_timeout_sec": 0, 00:15:54.730 "generate_uuids": false, 00:15:54.730 "high_priority_weight": 0, 00:15:54.730 "io_path_stat": false, 00:15:54.730 "io_queue_requests": 0, 00:15:54.730 "keep_alive_timeout_ms": 10000, 00:15:54.730 "low_priority_weight": 0, 00:15:54.730 "medium_priority_weight": 0, 00:15:54.730 "nvme_adminq_poll_period_us": 10000, 00:15:54.730 "nvme_error_stat": false, 00:15:54.730 "nvme_ioq_poll_period_us": 0, 00:15:54.730 "rdma_cm_event_timeout_ms": 0, 00:15:54.730 "rdma_max_cq_size": 0, 00:15:54.730 "rdma_srq_size": 0, 00:15:54.730 "reconnect_delay_sec": 0, 00:15:54.730 "timeout_admin_us": 0, 
00:15:54.730 "timeout_us": 0, 00:15:54.730 "transport_ack_timeout": 0, 00:15:54.730 "transport_retry_count": 4, 00:15:54.730 "transport_tos": 0 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "bdev_nvme_set_hotplug", 00:15:54.730 "params": { 00:15:54.730 "enable": false, 00:15:54.730 "period_us": 100000 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "bdev_malloc_create", 00:15:54.730 "params": { 00:15:54.730 "block_size": 4096, 00:15:54.730 "name": "malloc0", 00:15:54.730 "num_blocks": 8192, 00:15:54.730 "optimal_io_boundary": 0, 00:15:54.730 "physical_block_size": 4096, 00:15:54.730 "uuid": "fb50d90a-c0ec-411a-b5fd-91ce0c1a15a7" 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "bdev_wait_for_examine" 00:15:54.730 } 00:15:54.730 ] 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "subsystem": "nbd", 00:15:54.730 "config": [] 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "subsystem": "scheduler", 00:15:54.730 "config": [ 00:15:54.730 { 00:15:54.730 "method": "framework_set_scheduler", 00:15:54.730 "params": { 00:15:54.730 "name": "static" 00:15:54.730 } 00:15:54.730 } 00:15:54.730 ] 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "subsystem": "nvmf", 00:15:54.730 "config": [ 00:15:54.730 { 00:15:54.730 "method": "nvmf_set_config", 00:15:54.730 "params": { 00:15:54.730 "admin_cmd_passthru": { 00:15:54.730 "identify_ctrlr": false 00:15:54.730 }, 00:15:54.730 "discovery_filter": "match_any" 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_set_max_subsystems", 00:15:54.730 "params": { 00:15:54.730 "max_subsystems": 1024 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_set_crdt", 00:15:54.730 "params": { 00:15:54.730 "crdt1": 0, 00:15:54.730 "crdt2": 0, 00:15:54.730 "crdt3": 0 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_create_transport", 00:15:54.730 "params": { 00:15:54.730 "abort_timeout_sec": 1, 00:15:54.730 "ack_timeout": 0, 00:15:54.730 "buf_cache_size": 4294967295, 00:15:54.730 "c2h_success": false, 00:15:54.730 "data_wr_pool_size": 0, 00:15:54.730 "dif_insert_or_strip": false, 00:15:54.730 "in_capsule_data_size": 4096, 00:15:54.730 "io_unit_size": 131072, 00:15:54.730 "max_aq_depth": 128, 00:15:54.730 "max_io_qpairs_per_ctrlr": 127, 00:15:54.730 "max_io_size": 131072, 00:15:54.730 "max_queue_depth": 128, 00:15:54.730 "num_shared_buffers": 511, 00:15:54.730 "sock_priority": 0, 00:15:54.730 "trtype": "TCP", 00:15:54.730 "zcopy": false 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_create_subsystem", 00:15:54.730 "params": { 00:15:54.730 "allow_any_host": false, 00:15:54.730 "ana_reporting": false, 00:15:54.730 "max_cntlid": 65519, 00:15:54.730 "max_namespaces": 32, 00:15:54.730 "min_cntlid": 1, 00:15:54.730 "model_number": "SPDK bdev Controller", 00:15:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.730 "serial_number": "00000000000000000000" 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_subsystem_add_host", 00:15:54.730 "params": { 00:15:54.730 "host": "nqn.2016-06.io.spdk:host1", 00:15:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.730 "psk": "key0" 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_subsystem_add_ns", 00:15:54.730 "params": { 00:15:54.730 "namespace": { 00:15:54.730 "bdev_name": "malloc0", 00:15:54.730 "nguid": "FB50D90AC0EC411AB5FD91CE0C1A15A7", 00:15:54.730 "no_auto_visible": false, 00:15:54.730 "nsid": 1, 00:15:54.730 "uuid": 
"fb50d90a-c0ec-411a-b5fd-91ce0c1a15a7" 00:15:54.730 }, 00:15:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:54.730 } 00:15:54.730 }, 00:15:54.730 { 00:15:54.730 "method": "nvmf_subsystem_add_listener", 00:15:54.730 "params": { 00:15:54.730 "listen_address": { 00:15:54.730 "adrfam": "IPv4", 00:15:54.730 "traddr": "10.0.0.2", 00:15:54.730 "trsvcid": "4420", 00:15:54.730 "trtype": "TCP" 00:15:54.730 }, 00:15:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.730 "secure_channel": true 00:15:54.730 } 00:15:54.730 } 00:15:54.730 ] 00:15:54.730 } 00:15:54.730 ] 00:15:54.730 }' 00:15:54.730 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:54.988 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:15:54.988 "subsystems": [ 00:15:54.988 { 00:15:54.988 "subsystem": "keyring", 00:15:54.988 "config": [ 00:15:54.988 { 00:15:54.988 "method": "keyring_file_add_key", 00:15:54.988 "params": { 00:15:54.988 "name": "key0", 00:15:54.988 "path": "/tmp/tmp.6tWS54vE6Q" 00:15:54.988 } 00:15:54.988 } 00:15:54.988 ] 00:15:54.988 }, 00:15:54.988 { 00:15:54.988 "subsystem": "iobuf", 00:15:54.988 "config": [ 00:15:54.988 { 00:15:54.988 "method": "iobuf_set_options", 00:15:54.988 "params": { 00:15:54.988 "large_bufsize": 135168, 00:15:54.988 "large_pool_count": 1024, 00:15:54.988 "small_bufsize": 8192, 00:15:54.988 "small_pool_count": 8192 00:15:54.988 } 00:15:54.988 } 00:15:54.988 ] 00:15:54.988 }, 00:15:54.988 { 00:15:54.988 "subsystem": "sock", 00:15:54.988 "config": [ 00:15:54.988 { 00:15:54.988 "method": "sock_set_default_impl", 00:15:54.988 "params": { 00:15:54.989 "impl_name": "posix" 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "sock_impl_set_options", 00:15:54.989 "params": { 00:15:54.989 "enable_ktls": false, 00:15:54.989 "enable_placement_id": 0, 00:15:54.989 "enable_quickack": false, 00:15:54.989 "enable_recv_pipe": true, 00:15:54.989 "enable_zerocopy_send_client": false, 00:15:54.989 "enable_zerocopy_send_server": true, 00:15:54.989 "impl_name": "ssl", 00:15:54.989 "recv_buf_size": 4096, 00:15:54.989 "send_buf_size": 4096, 00:15:54.989 "tls_version": 0, 00:15:54.989 "zerocopy_threshold": 0 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "sock_impl_set_options", 00:15:54.989 "params": { 00:15:54.989 "enable_ktls": false, 00:15:54.989 "enable_placement_id": 0, 00:15:54.989 "enable_quickack": false, 00:15:54.989 "enable_recv_pipe": true, 00:15:54.989 "enable_zerocopy_send_client": false, 00:15:54.989 "enable_zerocopy_send_server": true, 00:15:54.989 "impl_name": "posix", 00:15:54.989 "recv_buf_size": 2097152, 00:15:54.989 "send_buf_size": 2097152, 00:15:54.989 "tls_version": 0, 00:15:54.989 "zerocopy_threshold": 0 00:15:54.989 } 00:15:54.989 } 00:15:54.989 ] 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "subsystem": "vmd", 00:15:54.989 "config": [] 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "subsystem": "accel", 00:15:54.989 "config": [ 00:15:54.989 { 00:15:54.989 "method": "accel_set_options", 00:15:54.989 "params": { 00:15:54.989 "buf_count": 2048, 00:15:54.989 "large_cache_size": 16, 00:15:54.989 "sequence_count": 2048, 00:15:54.989 "small_cache_size": 128, 00:15:54.989 "task_count": 2048 00:15:54.989 } 00:15:54.989 } 00:15:54.989 ] 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "subsystem": "bdev", 00:15:54.989 "config": [ 00:15:54.989 { 00:15:54.989 "method": "bdev_set_options", 00:15:54.989 "params": { 00:15:54.989 "bdev_auto_examine": true, 
00:15:54.989 "bdev_io_cache_size": 256, 00:15:54.989 "bdev_io_pool_size": 65535, 00:15:54.989 "iobuf_large_cache_size": 16, 00:15:54.989 "iobuf_small_cache_size": 128 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_raid_set_options", 00:15:54.989 "params": { 00:15:54.989 "process_window_size_kb": 1024 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_iscsi_set_options", 00:15:54.989 "params": { 00:15:54.989 "timeout_sec": 30 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_nvme_set_options", 00:15:54.989 "params": { 00:15:54.989 "action_on_timeout": "none", 00:15:54.989 "allow_accel_sequence": false, 00:15:54.989 "arbitration_burst": 0, 00:15:54.989 "bdev_retry_count": 3, 00:15:54.989 "ctrlr_loss_timeout_sec": 0, 00:15:54.989 "delay_cmd_submit": true, 00:15:54.989 "dhchap_dhgroups": [ 00:15:54.989 "null", 00:15:54.989 "ffdhe2048", 00:15:54.989 "ffdhe3072", 00:15:54.989 "ffdhe4096", 00:15:54.989 "ffdhe6144", 00:15:54.989 "ffdhe8192" 00:15:54.989 ], 00:15:54.989 "dhchap_digests": [ 00:15:54.989 "sha256", 00:15:54.989 "sha384", 00:15:54.989 "sha512" 00:15:54.989 ], 00:15:54.989 "disable_auto_failback": false, 00:15:54.989 "fast_io_fail_timeout_sec": 0, 00:15:54.989 "generate_uuids": false, 00:15:54.989 "high_priority_weight": 0, 00:15:54.989 "io_path_stat": false, 00:15:54.989 "io_queue_requests": 512, 00:15:54.989 "keep_alive_timeout_ms": 10000, 00:15:54.989 "low_priority_weight": 0, 00:15:54.989 "medium_priority_weight": 0, 00:15:54.989 "nvme_adminq_poll_period_us": 10000, 00:15:54.989 "nvme_error_stat": false, 00:15:54.989 "nvme_ioq_poll_period_us": 0, 00:15:54.989 "rdma_cm_event_timeout_ms": 0, 00:15:54.989 "rdma_max_cq_size": 0, 00:15:54.989 "rdma_srq_size": 0, 00:15:54.989 "reconnect_delay_sec": 0, 00:15:54.989 "timeout_admin_us": 0, 00:15:54.989 "timeout_us": 0, 00:15:54.989 "transport_ack_timeout": 0, 00:15:54.989 "transport_retry_count": 4, 00:15:54.989 "transport_tos": 0 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_nvme_attach_controller", 00:15:54.989 "params": { 00:15:54.989 "adrfam": "IPv4", 00:15:54.989 "ctrlr_loss_timeout_sec": 0, 00:15:54.989 "ddgst": false, 00:15:54.989 "fast_io_fail_timeout_sec": 0, 00:15:54.989 "hdgst": false, 00:15:54.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.989 "name": "nvme0", 00:15:54.989 "prchk_guard": false, 00:15:54.989 "prchk_reftag": false, 00:15:54.989 "psk": "key0", 00:15:54.989 "reconnect_delay_sec": 0, 00:15:54.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.989 "traddr": "10.0.0.2", 00:15:54.989 "trsvcid": "4420", 00:15:54.989 "trtype": "TCP" 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_nvme_set_hotplug", 00:15:54.989 "params": { 00:15:54.989 "enable": false, 00:15:54.989 "period_us": 100000 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_enable_histogram", 00:15:54.989 "params": { 00:15:54.989 "enable": true, 00:15:54.989 "name": "nvme0n1" 00:15:54.989 } 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "method": "bdev_wait_for_examine" 00:15:54.989 } 00:15:54.989 ] 00:15:54.989 }, 00:15:54.989 { 00:15:54.989 "subsystem": "nbd", 00:15:54.989 "config": [] 00:15:54.989 } 00:15:54.989 ] 00:15:54.989 }' 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85122 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85122 ']' 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85122 
00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85122 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:54.989 killing process with pid 85122 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85122' 00:15:54.989 Received shutdown signal, test time was about 1.000000 seconds 00:15:54.989 00:15:54.989 Latency(us) 00:15:54.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.989 =================================================================================================================== 00:15:54.989 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85122 00:15:54.989 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85122 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85071 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85071 ']' 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85071 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85071 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.247 killing process with pid 85071 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85071' 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85071 00:15:55.247 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85071 00:15:55.504 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:55.504 11:34:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:15:55.504 "subsystems": [ 00:15:55.504 { 00:15:55.504 "subsystem": "keyring", 00:15:55.504 "config": [ 00:15:55.504 { 00:15:55.504 "method": "keyring_file_add_key", 00:15:55.504 "params": { 00:15:55.504 "name": "key0", 00:15:55.504 "path": "/tmp/tmp.6tWS54vE6Q" 00:15:55.504 } 00:15:55.504 } 00:15:55.504 ] 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "subsystem": "iobuf", 00:15:55.504 "config": [ 00:15:55.504 { 00:15:55.504 "method": "iobuf_set_options", 00:15:55.504 "params": { 00:15:55.504 "large_bufsize": 135168, 00:15:55.504 "large_pool_count": 1024, 00:15:55.504 "small_bufsize": 8192, 00:15:55.504 "small_pool_count": 8192 00:15:55.504 } 00:15:55.504 } 00:15:55.504 ] 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "subsystem": "sock", 00:15:55.504 "config": [ 00:15:55.504 { 00:15:55.504 "method": "sock_set_default_impl", 00:15:55.504 "params": { 00:15:55.504 "impl_name": "posix" 00:15:55.504 } 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "method": "sock_impl_set_options", 00:15:55.504 "params": { 00:15:55.504 "enable_ktls": false, 00:15:55.504 
"enable_placement_id": 0, 00:15:55.504 "enable_quickack": false, 00:15:55.504 "enable_recv_pipe": true, 00:15:55.504 "enable_zerocopy_send_client": false, 00:15:55.504 "enable_zerocopy_send_server": true, 00:15:55.504 "impl_name": "ssl", 00:15:55.504 "recv_buf_size": 4096, 00:15:55.504 "send_buf_size": 4096, 00:15:55.504 "tls_version": 0, 00:15:55.504 "zerocopy_threshold": 0 00:15:55.504 } 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "method": "sock_impl_set_options", 00:15:55.504 "params": { 00:15:55.504 "enable_ktls": false, 00:15:55.504 "enable_placement_id": 0, 00:15:55.504 "enable_quickack": false, 00:15:55.504 "enable_recv_pipe": true, 00:15:55.504 "enable_zerocopy_send_client": false, 00:15:55.504 "enable_zerocopy_send_server": true, 00:15:55.504 "impl_name": "posix", 00:15:55.504 "recv_buf_size": 2097152, 00:15:55.504 "send_buf_size": 2097152, 00:15:55.504 "tls_version": 0, 00:15:55.504 "zerocopy_threshold": 0 00:15:55.504 } 00:15:55.504 } 00:15:55.504 ] 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "subsystem": "vmd", 00:15:55.504 "config": [] 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "subsystem": "accel", 00:15:55.504 "config": [ 00:15:55.504 { 00:15:55.504 "method": "accel_set_options", 00:15:55.504 "params": { 00:15:55.504 "buf_count": 2048, 00:15:55.504 "large_cache_size": 16, 00:15:55.504 "sequence_count": 2048, 00:15:55.504 "small_cache_size": 128, 00:15:55.504 "task_count": 2048 00:15:55.504 } 00:15:55.504 } 00:15:55.504 ] 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "subsystem": "bdev", 00:15:55.504 "config": [ 00:15:55.504 { 00:15:55.504 "method": "bdev_set_options", 00:15:55.504 "params": { 00:15:55.504 "bdev_auto_examine": true, 00:15:55.504 "bdev_io_cache_size": 256, 00:15:55.504 "bdev_io_pool_size": 65535, 00:15:55.504 "iobuf_large_cache_size": 16, 00:15:55.504 "iobuf_small_cache_size": 128 00:15:55.504 } 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "method": "bdev_raid_set_options", 00:15:55.504 "params": { 00:15:55.504 "process_window_size_kb": 1024 00:15:55.504 } 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "method": "bdev_iscsi_set_options", 00:15:55.504 "params": { 00:15:55.504 "timeout_sec": 30 00:15:55.504 } 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "method": "bdev_nvme_set_options", 00:15:55.504 "params": { 00:15:55.504 "action_on_timeout": "none", 00:15:55.504 "allow_accel_sequence": false, 00:15:55.504 "arbitration_burst": 0, 00:15:55.504 "bdev_retry_count": 3, 00:15:55.504 "ctrlr_loss_timeout_sec": 0, 00:15:55.504 "delay_cmd_submit": true, 00:15:55.504 "dhchap_dhgroups": [ 00:15:55.504 "null", 00:15:55.504 "ffdhe2048", 00:15:55.504 "ffdhe3072", 00:15:55.504 "ffdhe4096", 00:15:55.504 "ffdhe6144", 00:15:55.504 "ffdhe8192" 00:15:55.504 ], 00:15:55.504 "dhchap_digests": [ 00:15:55.504 "sha256", 00:15:55.504 "sha384", 00:15:55.504 "sha512" 00:15:55.504 ], 00:15:55.504 "disable_auto_failback": false, 00:15:55.504 "fast_io_fail_timeout_sec": 0, 00:15:55.504 "generate_uuids": false, 00:15:55.504 "high_priority_weight": 0, 00:15:55.504 "io_path_stat": false, 00:15:55.504 "io_queue_requests": 0, 00:15:55.504 "keep_alive_timeout_ms": 10000, 00:15:55.504 "low_priority_weight": 0, 00:15:55.504 "medium_priority_weight": 0, 00:15:55.504 "nvme_adminq_poll_period_us": 10000, 00:15:55.504 "nvme_error_stat": false, 00:15:55.504 "nvme_ioq_poll_period_us": 0, 00:15:55.504 "rdma_cm_event_timeout_ms": 0, 00:15:55.504 "rdma_max_cq_size": 0, 00:15:55.504 "rdma_srq_size": 0, 00:15:55.504 "reconnect_delay_sec": 0, 00:15:55.504 "timeout_admin_us": 0, 00:15:55.504 "timeout_us": 0, 
00:15:55.504 "transport_ack_timeout": 0, 00:15:55.504 "transport_retry_count": 4, 00:15:55.504 "transport_tos": 0 00:15:55.504 } 00:15:55.504 }, 00:15:55.504 { 00:15:55.504 "method": "bdev_nvme_set_hotplug", 00:15:55.504 "params": { 00:15:55.504 "enable": false, 00:15:55.504 "period_us": 100000 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "bdev_malloc_create", 00:15:55.505 "params": { 00:15:55.505 "block_size": 4096, 00:15:55.505 "name": "malloc0", 00:15:55.505 "num_blocks": 8192, 00:15:55.505 "optimal_io_boundary": 0, 00:15:55.505 "physical_block_size": 4096, 00:15:55.505 "uuid": "fb50d90a-c0ec-411a-b5fd-91ce0c1a15a7" 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "bdev_wait_for_examine" 00:15:55.505 } 00:15:55.505 ] 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "subsystem": "nbd", 00:15:55.505 "config": [] 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "subsystem": "scheduler", 00:15:55.505 "config": [ 00:15:55.505 { 00:15:55.505 "method": "framework_set_scheduler", 00:15:55.505 "params": { 00:15:55.505 "name": "static" 00:15:55.505 } 00:15:55.505 } 00:15:55.505 ] 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "subsystem": "nvmf", 00:15:55.505 "config": [ 00:15:55.505 { 00:15:55.505 "method": "nvmf_set_config", 00:15:55.505 "params": { 00:15:55.505 "admin_cmd_passthru": { 00:15:55.505 "identify_ctrlr": false 00:15:55.505 }, 00:15:55.505 "discovery_filter": "match_any" 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_set_max_subsystems", 00:15:55.505 "params": { 00:15:55.505 "max_subsystems": 1024 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_set_crdt", 00:15:55.505 "params": { 00:15:55.505 "crdt1": 0, 00:15:55.505 "crdt2": 0, 00:15:55.505 "crdt3": 0 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_create_transport", 00:15:55.505 "params": { 00:15:55.505 "abort_timeout_sec": 1, 00:15:55.505 "ack_timeout": 0, 00:15:55.505 "buf_cache_size": 4294967295, 00:15:55.505 "c2h_success": false, 00:15:55.505 "data_wr_pool_size": 0, 00:15:55.505 "dif_insert_or_strip": false, 00:15:55.505 "in_capsule_data_size": 4096, 00:15:55.505 "io_unit_size": 131072, 00:15:55.505 "max_aq_depth": 128, 00:15:55.505 "max_io_qpairs_per_ctrlr": 127, 00:15:55.505 "max_io_size": 131072, 00:15:55.505 "max_queue_depth": 128, 00:15:55.505 "num_shared_buffers": 511, 00:15:55.505 "sock_priority": 0, 00:15:55.505 "trtype": "TCP", 00:15:55.505 "zcopy": false 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_create_subsystem", 00:15:55.505 "params": { 00:15:55.505 "allow_any_host": false, 00:15:55.505 "ana_reporting": false, 00:15:55.505 "max_cntlid": 65519, 00:15:55.505 "max_namespaces": 32, 00:15:55.505 "min_cntlid": 1, 00:15:55.505 "model_number": "SPDK bdev Controller", 00:15:55.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.505 "serial_number": "00000000000000000000" 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_subsystem_add_host", 00:15:55.505 "params": { 00:15:55.505 "host": "nqn.2016-06.io.spdk:host1", 00:15:55.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.505 "psk": "key0" 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_subsystem_add_ns", 00:15:55.505 "params": { 00:15:55.505 "namespace": { 00:15:55.505 "bdev_name": "malloc0", 00:15:55.505 "nguid": "FB50D90AC0EC411AB5FD91CE0C1A15A7", 00:15:55.505 "no_auto_visible": false, 00:15:55.505 "nsid": 1, 00:15:55.505 "uuid": "fb50d90a-c0ec-411a-b5fd-91ce0c1a15a7" 
00:15:55.505 }, 00:15:55.505 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:55.505 } 00:15:55.505 }, 00:15:55.505 { 00:15:55.505 "method": "nvmf_subsystem_add_listener", 00:15:55.505 "params": { 00:15:55.505 "listen_address": { 00:15:55.505 "adrfam": "IPv4", 00:15:55.505 "traddr": "10.0.0.2", 00:15:55.505 "trsvcid": "4420", 00:15:55.505 "trtype": "TCP" 00:15:55.505 }, 00:15:55.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.505 "secure_channel": true 00:15:55.505 } 00:15:55.505 } 00:15:55.505 ] 00:15:55.505 } 00:15:55.505 ] 00:15:55.505 }' 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85207 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85207 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85207 ']' 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.505 11:34:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.505 [2024-07-15 11:34:32.838690] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:15:55.505 [2024-07-15 11:34:32.838794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.505 [2024-07-15 11:34:32.976824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.763 [2024-07-15 11:34:33.061070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.763 [2024-07-15 11:34:33.061118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.763 [2024-07-15 11:34:33.061129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.763 [2024-07-15 11:34:33.061138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.763 [2024-07-15 11:34:33.061145] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
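Note: the target for this case is started inside the nvmf_tgt_ns_spdk network namespace with the JSON configuration above handed over on /dev/fd/62, so no config file touches disk. A hedged sketch of that launch pattern, reusing the tgtcfg variable captured earlier in the trace:

    # sketch: start nvmf_tgt in its own netns, feeding the saved JSON config through a file descriptor
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$tgtcfg") &        # the process substitution appears as /dev/fd/62 inside the target
    nvmfpid=$!                        # 85207 in this run; waitforlisten then polls /var/tmp/spdk.sock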
00:15:55.763 [2024-07-15 11:34:33.061244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.020 [2024-07-15 11:34:33.253974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.020 [2024-07-15 11:34:33.285905] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:56.020 [2024-07-15 11:34:33.286173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85251 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85251 /var/tmp/bdevperf.sock 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85251 ']' 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.585 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.586 11:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:56.586 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
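Note: the bdevperf side is brought up the same way: started with -z so it idles until perform_tests is issued over /var/tmp/bdevperf.sock, with its whole keyring/bdev stack configured from the JSON echoed below via /dev/fd/63. A hedged sketch of the invocation implied by the trace (bperfcfg holds that JSON):

    # sketch: 128 outstanding 4 KiB verify I/Os for 1 second on core mask 0x2,
    # config delivered over a file descriptor, -z = wait for the RPC start signal
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &
    bdevperf_pid=$!                   # 85251 in this run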
00:15:56.586 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.586 11:34:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.586 11:34:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:56.586 "subsystems": [ 00:15:56.586 { 00:15:56.586 "subsystem": "keyring", 00:15:56.586 "config": [ 00:15:56.586 { 00:15:56.586 "method": "keyring_file_add_key", 00:15:56.586 "params": { 00:15:56.586 "name": "key0", 00:15:56.586 "path": "/tmp/tmp.6tWS54vE6Q" 00:15:56.586 } 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "subsystem": "iobuf", 00:15:56.586 "config": [ 00:15:56.586 { 00:15:56.586 "method": "iobuf_set_options", 00:15:56.586 "params": { 00:15:56.586 "large_bufsize": 135168, 00:15:56.586 "large_pool_count": 1024, 00:15:56.586 "small_bufsize": 8192, 00:15:56.586 "small_pool_count": 8192 00:15:56.586 } 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "subsystem": "sock", 00:15:56.586 "config": [ 00:15:56.586 { 00:15:56.586 "method": "sock_set_default_impl", 00:15:56.586 "params": { 00:15:56.586 "impl_name": "posix" 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "sock_impl_set_options", 00:15:56.586 "params": { 00:15:56.586 "enable_ktls": false, 00:15:56.586 "enable_placement_id": 0, 00:15:56.586 "enable_quickack": false, 00:15:56.586 "enable_recv_pipe": true, 00:15:56.586 "enable_zerocopy_send_client": false, 00:15:56.586 "enable_zerocopy_send_server": true, 00:15:56.586 "impl_name": "ssl", 00:15:56.586 "recv_buf_size": 4096, 00:15:56.586 "send_buf_size": 4096, 00:15:56.586 "tls_version": 0, 00:15:56.586 "zerocopy_threshold": 0 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "sock_impl_set_options", 00:15:56.586 "params": { 00:15:56.586 "enable_ktls": false, 00:15:56.586 "enable_placement_id": 0, 00:15:56.586 "enable_quickack": false, 00:15:56.586 "enable_recv_pipe": true, 00:15:56.586 "enable_zerocopy_send_client": false, 00:15:56.586 "enable_zerocopy_send_server": true, 00:15:56.586 "impl_name": "posix", 00:15:56.586 "recv_buf_size": 2097152, 00:15:56.586 "send_buf_size": 2097152, 00:15:56.586 "tls_version": 0, 00:15:56.586 "zerocopy_threshold": 0 00:15:56.586 } 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "subsystem": "vmd", 00:15:56.586 "config": [] 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "subsystem": "accel", 00:15:56.586 "config": [ 00:15:56.586 { 00:15:56.586 "method": "accel_set_options", 00:15:56.586 "params": { 00:15:56.586 "buf_count": 2048, 00:15:56.586 "large_cache_size": 16, 00:15:56.586 "sequence_count": 2048, 00:15:56.586 "small_cache_size": 128, 00:15:56.586 "task_count": 2048 00:15:56.586 } 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "subsystem": "bdev", 00:15:56.586 "config": [ 00:15:56.586 { 00:15:56.586 "method": "bdev_set_options", 00:15:56.586 "params": { 00:15:56.586 "bdev_auto_examine": true, 00:15:56.586 "bdev_io_cache_size": 256, 00:15:56.586 "bdev_io_pool_size": 65535, 00:15:56.586 "iobuf_large_cache_size": 16, 00:15:56.586 "iobuf_small_cache_size": 128 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "bdev_raid_set_options", 00:15:56.586 "params": { 00:15:56.586 "process_window_size_kb": 1024 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "bdev_iscsi_set_options", 00:15:56.586 "params": { 00:15:56.586 "timeout_sec": 30 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": 
"bdev_nvme_set_options", 00:15:56.586 "params": { 00:15:56.586 "action_on_timeout": "none", 00:15:56.586 "allow_accel_sequence": false, 00:15:56.586 "arbitration_burst": 0, 00:15:56.586 "bdev_retry_count": 3, 00:15:56.586 "ctrlr_loss_timeout_sec": 0, 00:15:56.586 "delay_cmd_submit": true, 00:15:56.586 "dhchap_dhgroups": [ 00:15:56.586 "null", 00:15:56.586 "ffdhe2048", 00:15:56.586 "ffdhe3072", 00:15:56.586 "ffdhe4096", 00:15:56.586 "ffdhe6144", 00:15:56.586 "ffdhe8192" 00:15:56.586 ], 00:15:56.586 "dhchap_digests": [ 00:15:56.586 "sha256", 00:15:56.586 "sha384", 00:15:56.586 "sha512" 00:15:56.586 ], 00:15:56.586 "disable_auto_failback": false, 00:15:56.586 "fast_io_fail_timeout_sec": 0, 00:15:56.586 "generate_uuids": false, 00:15:56.586 "high_priority_weight": 0, 00:15:56.586 "io_path_stat": false, 00:15:56.586 "io_queue_requests": 512, 00:15:56.586 "keep_alive_timeout_ms": 10000, 00:15:56.586 "low_priority_weight": 0, 00:15:56.586 "medium_priority_weight": 0, 00:15:56.586 "nvme_adminq_poll_period_us": 10000, 00:15:56.586 "nvme_error_stat": false, 00:15:56.586 "nvme_ioq_poll_period_us": 0, 00:15:56.586 "rdma_cm_event_timeout_ms": 0, 00:15:56.586 "rdma_max_cq_size": 0, 00:15:56.586 "rdma_srq_size": 0, 00:15:56.586 "reconnect_delay_sec": 0, 00:15:56.586 "timeout_admin_us": 0, 00:15:56.586 "timeout_us": 0, 00:15:56.586 "transport_ack_timeout": 0, 00:15:56.586 "transport_retry_count": 4, 00:15:56.586 "transport_tos": 0 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "bdev_nvme_attach_controller", 00:15:56.586 "params": { 00:15:56.586 "adrfam": "IPv4", 00:15:56.586 "ctrlr_loss_timeout_sec": 0, 00:15:56.586 "ddgst": false, 00:15:56.586 "fast_io_fail_timeout_sec": 0, 00:15:56.586 "hdgst": false, 00:15:56.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.586 "name": "nvme0", 00:15:56.586 "prchk_guard": false, 00:15:56.586 "prchk_reftag": false, 00:15:56.586 "psk": "key0", 00:15:56.586 "reconnect_delay_sec": 0, 00:15:56.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.586 "traddr": "10.0.0.2", 00:15:56.586 "trsvcid": "4420", 00:15:56.586 "trtype": "TCP" 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "bdev_nvme_set_hotplug", 00:15:56.586 "params": { 00:15:56.586 "enable": false, 00:15:56.586 "period_us": 100000 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "bdev_enable_histogram", 00:15:56.586 "params": { 00:15:56.586 "enable": true, 00:15:56.586 "name": "nvme0n1" 00:15:56.586 } 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "method": "bdev_wait_for_examine" 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }, 00:15:56.586 { 00:15:56.586 "subsystem": "nbd", 00:15:56.586 "config": [] 00:15:56.586 } 00:15:56.586 ] 00:15:56.586 }' 00:15:56.586 [2024-07-15 11:34:33.927730] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:15:56.586 [2024-07-15 11:34:33.927879] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85251 ] 00:15:56.845 [2024-07-15 11:34:34.080125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.845 [2024-07-15 11:34:34.149159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.845 [2024-07-15 11:34:34.287473] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:57.818 11:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.818 11:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:57.818 11:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.818 11:34:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:58.076 11:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.076 11:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:58.076 Running I/O for 1 seconds... 00:15:59.009 00:15:59.009 Latency(us) 00:15:59.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.009 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:59.009 Verification LBA range: start 0x0 length 0x2000 00:15:59.009 nvme0n1 : 1.03 3546.72 13.85 0.00 0.00 35533.27 6315.29 24903.68 00:15:59.009 =================================================================================================================== 00:15:59.009 Total : 3546.72 13.85 0.00 0.00 35533.27 6315.29 24903.68 00:15:59.009 0 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:59.267 nvmf_trace.0 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85251 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85251 ']' 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85251 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.267 
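Note: before tearing down, the case checks that the controller attached over the TLS channel really is nvme0 (bdev_nvme_get_controllers piped through jq) and then drives one second of verify I/O with histograms enabled; the table above shows roughly 3550 IOPS for this run. A hedged sketch of that name assertion:

    # sketch of the controller-name check seen in the trace above
    name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
               bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || exit 1   # fail the case if the TLS-attached controller is missing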
11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85251 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.267 killing process with pid 85251 00:15:59.267 Received shutdown signal, test time was about 1.000000 seconds 00:15:59.267 00:15:59.267 Latency(us) 00:15:59.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.267 =================================================================================================================== 00:15:59.267 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85251' 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85251 00:15:59.267 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85251 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:59.526 rmmod nvme_tcp 00:15:59.526 rmmod nvme_fabrics 00:15:59.526 rmmod nvme_keyring 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85207 ']' 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85207 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85207 ']' 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85207 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85207 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:59.526 killing process with pid 85207 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85207' 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85207 00:15:59.526 11:34:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85207 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.O0n7R8f0NM /tmp/tmp.Mi1Q8cSztT /tmp/tmp.6tWS54vE6Q 00:15:59.785 00:15:59.785 real 1m24.542s 00:15:59.785 user 2m16.092s 00:15:59.785 sys 0m27.079s 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.785 ************************************ 00:15:59.785 11:34:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.785 END TEST nvmf_tls 00:15:59.785 ************************************ 00:15:59.785 11:34:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:59.785 11:34:37 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:59.785 11:34:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:59.785 11:34:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.785 11:34:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:59.785 ************************************ 00:15:59.785 START TEST nvmf_fips 00:15:59.785 ************************************ 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:59.785 * Looking for test storage... 
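Note: the fips.sh trace below first gates on the installed OpenSSL: check_openssl_version takes the version string from openssl version, and cmp_versions in scripts/common.sh splits both it and the 3.0.0 floor on '.', '-' and ':' and compares field by field (3.0.9 >= 3.0.0 here). A condensed sketch of an equivalent gate, with sort -V standing in for the field-by-field loop:

    # equivalent version gate: require OpenSSL >= 3.0.0 before attempting the FIPS provider checks
    installed=$(openssl version | awk '{print $2}')     # "3.0.9" in this run
    required=3.0.0
    if [[ $(printf '%s\n' "$required" "$installed" | sort -V | head -n1) == "$required" ]]; then
        echo "OpenSSL $installed satisfies >= $required"
    else
        echo "OpenSSL $installed is too old" >&2; exit 1
    fi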
00:15:59.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:59.785 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:00.044 Error setting digest 00:16:00.044 00C2063E2C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:00.044 00C2063E2C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:00.044 Cannot find device "nvmf_tgt_br" 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.044 Cannot find device "nvmf_tgt_br2" 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:00.044 Cannot find device "nvmf_tgt_br" 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:00.044 Cannot find device "nvmf_tgt_br2" 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:00.044 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:00.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:16:00.303 00:16:00.303 --- 10.0.0.2 ping statistics --- 00:16:00.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.303 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:00.303 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:00.303 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:00.303 00:16:00.303 --- 10.0.0.3 ping statistics --- 00:16:00.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.303 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:00.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:16:00.303 00:16:00.303 --- 10.0.0.1 ping statistics --- 00:16:00.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.303 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:00.303 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.304 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:00.304 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:00.304 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.304 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:00.304 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85544 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85544 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85544 ']' 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.562 11:34:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:00.562 [2024-07-15 11:34:37.912929] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:16:00.562 [2024-07-15 11:34:37.913031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.821 [2024-07-15 11:34:38.048887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.821 [2024-07-15 11:34:38.138432] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.821 [2024-07-15 11:34:38.138895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.821 [2024-07-15 11:34:38.139038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.821 [2024-07-15 11:34:38.139160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.821 [2024-07-15 11:34:38.139282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.821 [2024-07-15 11:34:38.139458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.755 11:34:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.755 11:34:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:01.755 11:34:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.755 11:34:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.755 11:34:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:01.755 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.013 [2024-07-15 11:34:39.255069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.013 [2024-07-15 11:34:39.271031] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:02.013 [2024-07-15 11:34:39.271259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.013 [2024-07-15 11:34:39.298104] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:02.013 malloc0 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85607 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85607 /var/tmp/bdevperf.sock 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85607 ']' 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.013 11:34:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:02.013 [2024-07-15 11:34:39.444468] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:02.013 [2024-07-15 11:34:39.444644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85607 ] 00:16:02.271 [2024-07-15 11:34:39.584939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.271 [2024-07-15 11:34:39.657356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.255 11:34:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.255 11:34:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:03.255 11:34:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:03.255 [2024-07-15 11:34:40.668335] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:03.255 [2024-07-15 11:34:40.668496] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:03.514 TLSTESTn1 00:16:03.514 11:34:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:03.514 Running I/O for 10 seconds... 
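Condensed from the fips.sh trace above: the failed md5 attempt with the unsupported-digest error is the positive FIPS check, after which the test writes the interop PSK to a 0600 file, attaches a bdevperf initiator to the target with that key, and runs verify I/O for 10 seconds. The following is a rough restatement only: the initiator-side calls are echoed verbatim above, while the target-side setup_nvmf_tgt_conf subcommands are inferred from the tcp.c notices and the PSK-path deprecation warning, and the malloc sizes are guesses.

    # PSK shared by target and initiator (fips.sh@136-@139)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt

    # target side (driven by setup_nvmf_tgt_conf through scripts/rpc.py; approximate)
    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_malloc_create 8 512 -b malloc0      # size is a guess; only the bdev name "malloc0" is visible
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt

    # initiator side, echoed at fips.sh@150 and fips.sh@154
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key.txt
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests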
00:16:13.477 00:16:13.477 Latency(us) 00:16:13.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.477 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:13.477 Verification LBA range: start 0x0 length 0x2000 00:16:13.477 TLSTESTn1 : 10.02 3802.53 14.85 0.00 0.00 33597.20 6494.02 30980.65 00:16:13.477 =================================================================================================================== 00:16:13.477 Total : 3802.53 14.85 0.00 0.00 33597.20 6494.02 30980.65 00:16:13.477 0 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:13.477 11:34:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:13.477 nvmf_trace.0 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85607 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85607 ']' 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85607 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85607 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:13.735 killing process with pid 85607 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85607' 00:16:13.735 Received shutdown signal, test time was about 10.000000 seconds 00:16:13.735 00:16:13.735 Latency(us) 00:16:13.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.735 =================================================================================================================== 00:16:13.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85607 00:16:13.735 [2024-07-15 11:34:51.066294] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:13.735 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85607 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
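From here the cleanup trap takes over: the shared-memory trace file nvmf_trace.0 is archived, then killprocess is applied to the bdevperf pid (85607) and, further below, to the nvmf_tgt pid (85544). The checks echoed above reduce to a guard-kill-wait shape; a simplified bash sketch of that shape, with the branches not exercised in this run (missing pid, dead pid, a sudo-owned process) treated as plain early returns:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # @948: a pid must be supplied
        kill -0 "$pid" || return 1                     # @952: it must still be running
        if [ "$(uname)" = Linux ]; then                # @953
            # @954/@958: look up the command name and bail out for sudo-owned
            # processes (simplified; in this run it resolves to reactor_2)
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"           # @966
        kill "$pid"                                    # @967
        wait "$pid"                                    # @972: reap it before continuing
    }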
00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.992 rmmod nvme_tcp 00:16:13.992 rmmod nvme_fabrics 00:16:13.992 rmmod nvme_keyring 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85544 ']' 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85544 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85544 ']' 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85544 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85544 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:13.992 killing process with pid 85544 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85544' 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85544 00:16:13.992 [2024-07-15 11:34:51.328140] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:13.992 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85544 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:14.250 00:16:14.250 real 0m14.398s 00:16:14.250 user 0m19.954s 00:16:14.250 sys 0m5.600s 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.250 11:34:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:14.250 ************************************ 00:16:14.250 END TEST nvmf_fips 00:16:14.250 ************************************ 00:16:14.250 11:34:51 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:14.250 11:34:51 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:16:14.250 11:34:51 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:16:14.250 11:34:51 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.250 11:34:51 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.250 11:34:51 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:16:14.250 11:34:51 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.250 11:34:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.250 ************************************ 00:16:14.250 START TEST nvmf_multicontroller 00:16:14.250 ************************************ 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:14.250 * Looking for test storage... 00:16:14.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.250 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
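The array plumbing traced here feeds the namespaced target launch echoed later at nvmf/common.sh@480. Pieced together as a sketch only (the initial NVMF_APP assignment and the SPDK_BIN_DIR spelling are not echoed in this excerpt and are assumed):

    # build_nvmf_app_args: shared-memory id and full tracepoint mask (common.sh@29)
    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")                # assumed starting value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

    # nvmf_veth_init then prepends the namespace wrapper (common.sh@148, @209)
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

    # effective launch for this run, as echoed at nvmf/common.sh@480
    # (-m 0xE here; the earlier fips run used -m 0x2)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE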
00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.251 11:34:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.251 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:14.508 Cannot find device "nvmf_tgt_br" 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.508 Cannot find device "nvmf_tgt_br2" 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:14.508 Cannot find device "nvmf_tgt_br" 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:14.508 Cannot find device "nvmf_tgt_br2" 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:14.508 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:14.765 11:34:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:14.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:16:14.765 00:16:14.765 --- 10.0.0.2 ping statistics --- 00:16:14.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.765 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:14.765 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.765 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:14.765 00:16:14.765 --- 10.0.0.3 ping statistics --- 00:16:14.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.765 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:14.765 00:16:14.765 --- 10.0.0.1 ping statistics --- 00:16:14.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.765 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:14.765 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85965 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85965 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85965 ']' 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.766 11:34:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:14.766 [2024-07-15 11:34:52.141405] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:14.766 [2024-07-15 11:34:52.141576] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.023 [2024-07-15 11:34:52.277599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.023 [2024-07-15 11:34:52.364660] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:15.023 [2024-07-15 11:34:52.364734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.023 [2024-07-15 11:34:52.364750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.023 [2024-07-15 11:34:52.364763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.023 [2024-07-15 11:34:52.364774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.023 [2024-07-15 11:34:52.364865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.023 [2024-07-15 11:34:52.364958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.023 [2024-07-15 11:34:52.365260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.601 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.601 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:15.601 11:34:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.601 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.601 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 [2024-07-15 11:34:53.098046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 Malloc0 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 [2024-07-15 11:34:53.151232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.858 [2024-07-15 11:34:53.163182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:15.858 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.859 Malloc1 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
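Stripped of the rpc_cmd framing, the target layout built by multicontroller.sh@27-@41 above is two single-namespace subsystems, each exposed on the same address over two portals; an equivalent scripts/rpc.py spelling of the same calls:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

With both subsystems reachable on 10.0.0.2 ports 4420/4421, the bdevperf session started below attaches NVMe0 once and then verifies that further bdev_nvme_attach_controller calls reusing that controller name with a different host NQN or subsystem are rejected with "A controller named NVMe0 already exists with the specified network path".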
00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86017 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86017 /var/tmp/bdevperf.sock 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86017 ']' 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.859 11:34:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.230 NVMe0n1 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.230 1 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:17.230 11:34:54 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.230 2024/07/15 11:34:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:17.230 request: 00:16:17.230 { 00:16:17.230 "method": "bdev_nvme_attach_controller", 00:16:17.230 "params": { 00:16:17.230 "name": "NVMe0", 00:16:17.230 "trtype": "tcp", 00:16:17.230 "traddr": "10.0.0.2", 00:16:17.230 "adrfam": "ipv4", 00:16:17.230 "trsvcid": "4420", 00:16:17.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.230 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:17.230 "hostaddr": "10.0.0.2", 00:16:17.230 "hostsvcid": "60000", 00:16:17.230 "prchk_reftag": false, 00:16:17.230 "prchk_guard": false, 00:16:17.230 "hdgst": false, 00:16:17.230 "ddgst": false 00:16:17.230 } 00:16:17.230 } 00:16:17.230 Got JSON-RPC error response 00:16:17.230 GoRPCClient: error on JSON-RPC call 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.230 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.230 2024/07/15 11:34:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:17.230 request: 00:16:17.230 { 00:16:17.230 "method": "bdev_nvme_attach_controller", 00:16:17.230 "params": { 00:16:17.230 "name": "NVMe0", 00:16:17.230 "trtype": "tcp", 00:16:17.230 "traddr": "10.0.0.2", 00:16:17.231 "adrfam": "ipv4", 00:16:17.231 "trsvcid": "4420", 00:16:17.231 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:17.231 "hostaddr": "10.0.0.2", 00:16:17.231 "hostsvcid": "60000", 00:16:17.231 "prchk_reftag": false, 00:16:17.231 "prchk_guard": false, 00:16:17.231 "hdgst": false, 00:16:17.231 "ddgst": false 00:16:17.231 } 00:16:17.231 } 00:16:17.231 Got JSON-RPC error response 00:16:17.231 GoRPCClient: error on JSON-RPC call 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 2024/07/15 11:34:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:17.231 request: 00:16:17.231 { 00:16:17.231 "method": "bdev_nvme_attach_controller", 00:16:17.231 "params": { 00:16:17.231 "name": "NVMe0", 00:16:17.231 "trtype": "tcp", 00:16:17.231 "traddr": "10.0.0.2", 00:16:17.231 "adrfam": "ipv4", 00:16:17.231 "trsvcid": "4420", 00:16:17.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.231 "hostaddr": "10.0.0.2", 00:16:17.231 "hostsvcid": "60000", 00:16:17.231 "prchk_reftag": false, 00:16:17.231 "prchk_guard": false, 00:16:17.231 "hdgst": false, 00:16:17.231 "ddgst": false, 00:16:17.231 "multipath": "disable" 00:16:17.231 } 00:16:17.231 } 00:16:17.231 Got JSON-RPC error response 00:16:17.231 GoRPCClient: error on JSON-RPC call 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 2024/07/15 11:34:54 error on JSON-RPC 
call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:17.231 request: 00:16:17.231 { 00:16:17.231 "method": "bdev_nvme_attach_controller", 00:16:17.231 "params": { 00:16:17.231 "name": "NVMe0", 00:16:17.231 "trtype": "tcp", 00:16:17.231 "traddr": "10.0.0.2", 00:16:17.231 "adrfam": "ipv4", 00:16:17.231 "trsvcid": "4420", 00:16:17.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.231 "hostaddr": "10.0.0.2", 00:16:17.231 "hostsvcid": "60000", 00:16:17.231 "prchk_reftag": false, 00:16:17.231 "prchk_guard": false, 00:16:17.231 "hdgst": false, 00:16:17.231 "ddgst": false, 00:16:17.231 "multipath": "failover" 00:16:17.231 } 00:16:17.231 } 00:16:17.231 Got JSON-RPC error response 00:16:17.231 GoRPCClient: error on JSON-RPC call 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.231 11:34:54 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:17.231 11:34:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.606 0 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86017 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86017 ']' 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86017 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86017 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86017' 00:16:18.606 killing process with pid 86017 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86017 00:16:18.606 11:34:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86017 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 
00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:16:18.606 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:18.606 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:18.606 [2024-07-15 11:34:53.287355] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:18.606 [2024-07-15 11:34:53.287494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86017 ] 00:16:18.606 [2024-07-15 11:34:53.418893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.606 [2024-07-15 11:34:53.508107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.606 [2024-07-15 11:34:54.643635] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d2ca214d-3763-4484-8468-c13f1ca1141c already exists 00:16:18.606 [2024-07-15 11:34:54.643722] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d2ca214d-3763-4484-8468-c13f1ca1141c alias for bdev NVMe1n1 00:16:18.606 [2024-07-15 11:34:54.643742] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:18.606 Running I/O for 1 seconds... 00:16:18.606 00:16:18.607 Latency(us) 00:16:18.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.607 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:18.607 NVMe0n1 : 1.01 18995.45 74.20 0.00 0.00 6726.84 3842.79 14179.61 00:16:18.607 =================================================================================================================== 00:16:18.607 Total : 18995.45 74.20 0.00 0.00 6726.84 3842.79 14179.61 00:16:18.607 Received shutdown signal, test time was about 1.000000 seconds 00:16:18.607 00:16:18.607 Latency(us) 00:16:18.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.607 =================================================================================================================== 00:16:18.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:18.607 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:18.607 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:18.607 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:18.607 11:34:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:18.607 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.607 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.865 rmmod nvme_tcp 00:16:18.865 rmmod nvme_fabrics 00:16:18.865 rmmod nvme_keyring 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85965 ']' 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85965 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85965 ']' 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85965 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85965 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:18.865 killing process with pid 85965 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85965' 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85965 00:16:18.865 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85965 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.124 00:16:19.124 real 0m4.794s 00:16:19.124 user 0m15.339s 00:16:19.124 sys 0m0.987s 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.124 ************************************ 00:16:19.124 END TEST nvmf_multicontroller 00:16:19.124 11:34:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:19.124 ************************************ 00:16:19.124 11:34:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.124 11:34:56 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:19.124 11:34:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.124 11:34:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.124 11:34:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.124 ************************************ 00:16:19.124 START TEST nvmf_aer 00:16:19.124 ************************************ 00:16:19.124 11:34:56 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:19.124 * Looking for test storage... 00:16:19.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:19.124 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.125 Cannot find device "nvmf_tgt_br" 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.125 Cannot find device "nvmf_tgt_br2" 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.125 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.383 Cannot find device "nvmf_tgt_br" 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.383 Cannot find device "nvmf_tgt_br2" 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.383 
11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.383 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:19.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:16:19.642 00:16:19.643 --- 10.0.0.2 ping statistics --- 00:16:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.643 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:19.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:19.643 00:16:19.643 --- 10.0.0.3 ping statistics --- 00:16:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.643 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:19.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:19.643 00:16:19.643 --- 10.0.0.1 ping statistics --- 00:16:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.643 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86266 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86266 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86266 ']' 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.643 11:34:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:19.643 [2024-07-15 11:34:56.963095] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:19.643 [2024-07-15 11:34:56.963251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.643 [2024-07-15 11:34:57.105156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.902 [2024-07-15 11:34:57.165060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.902 [2024-07-15 11:34:57.165120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:19.902 [2024-07-15 11:34:57.165132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.902 [2024-07-15 11:34:57.165143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.902 [2024-07-15 11:34:57.165150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.902 [2024-07-15 11:34:57.165768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.902 [2024-07-15 11:34:57.165839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.902 [2024-07-15 11:34:57.165915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.902 [2024-07-15 11:34:57.165922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.839 11:34:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:20.839 11:34:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:20.839 11:34:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:20.839 11:34:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:20.839 11:34:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 [2024-07-15 11:34:58.027121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 Malloc0 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 [2024-07-15 11:34:58.079819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.839 [ 00:16:20.839 { 00:16:20.839 "allow_any_host": true, 00:16:20.839 "hosts": [], 00:16:20.839 "listen_addresses": [], 00:16:20.839 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.839 "subtype": "Discovery" 00:16:20.839 }, 00:16:20.839 { 00:16:20.839 "allow_any_host": true, 00:16:20.839 "hosts": [], 00:16:20.839 "listen_addresses": [ 00:16:20.839 { 00:16:20.839 "adrfam": "IPv4", 00:16:20.839 "traddr": "10.0.0.2", 00:16:20.839 "trsvcid": "4420", 00:16:20.839 "trtype": "TCP" 00:16:20.839 } 00:16:20.839 ], 00:16:20.839 "max_cntlid": 65519, 00:16:20.839 "max_namespaces": 2, 00:16:20.839 "min_cntlid": 1, 00:16:20.839 "model_number": "SPDK bdev Controller", 00:16:20.839 "namespaces": [ 00:16:20.839 { 00:16:20.839 "bdev_name": "Malloc0", 00:16:20.839 "name": "Malloc0", 00:16:20.839 "nguid": "8A14063CDD9545A3B8034154FEEEEF8F", 00:16:20.839 "nsid": 1, 00:16:20.839 "uuid": "8a14063c-dd95-45a3-b803-4154feeeef8f" 00:16:20.839 } 00:16:20.839 ], 00:16:20.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.839 "serial_number": "SPDK00000000000001", 00:16:20.839 "subtype": "NVMe" 00:16:20.839 } 00:16:20.839 ] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86320 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.839 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 Malloc1 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 [ 00:16:21.099 { 00:16:21.099 "allow_any_host": true, 00:16:21.099 "hosts": [], 00:16:21.099 "listen_addresses": [], 00:16:21.099 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:21.099 "subtype": "Discovery" 00:16:21.099 }, 00:16:21.099 { 00:16:21.099 "allow_any_host": true, 00:16:21.099 "hosts": [], 00:16:21.099 "listen_addresses": [ 00:16:21.099 { 00:16:21.099 "adrfam": "IPv4", 00:16:21.099 "traddr": "10.0.0.2", 00:16:21.099 "trsvcid": "4420", 00:16:21.099 "trtype": "TCP" 00:16:21.099 } 00:16:21.099 ], 00:16:21.099 "max_cntlid": 65519, 00:16:21.099 "max_namespaces": 2, 00:16:21.099 "min_cntlid": 1, 00:16:21.099 "model_number": "SPDK bdev Controller", 00:16:21.099 "namespaces": [ 00:16:21.099 { 00:16:21.099 "bdev_name": "Malloc0", 00:16:21.099 "name": "Malloc0", 00:16:21.099 "nguid": "8A14063CDD9545A3B8034154FEEEEF8F", 00:16:21.099 Asynchronous Event Request test 00:16:21.099 Attaching to 10.0.0.2 00:16:21.099 Attached to 10.0.0.2 00:16:21.099 Registering asynchronous event callbacks... 00:16:21.099 Starting namespace attribute notice tests for all controllers... 00:16:21.099 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:21.099 aer_cb - Changed Namespace 00:16:21.099 Cleaning up... 
00:16:21.099 "nsid": 1, 00:16:21.099 "uuid": "8a14063c-dd95-45a3-b803-4154feeeef8f" 00:16:21.099 }, 00:16:21.099 { 00:16:21.099 "bdev_name": "Malloc1", 00:16:21.099 "name": "Malloc1", 00:16:21.099 "nguid": "3896FEE7F56C428B8A90F3BA7B2F6A49", 00:16:21.099 "nsid": 2, 00:16:21.099 "uuid": "3896fee7-f56c-428b-8a90-f3ba7b2f6a49" 00:16:21.099 } 00:16:21.099 ], 00:16:21.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.099 "serial_number": "SPDK00000000000001", 00:16:21.099 "subtype": "NVMe" 00:16:21.099 } 00:16:21.099 ] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86320 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.099 rmmod nvme_tcp 00:16:21.099 rmmod nvme_fabrics 00:16:21.099 rmmod nvme_keyring 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86266 ']' 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86266 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86266 ']' 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86266 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86266 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.099 killing process with pid 86266 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86266' 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86266 00:16:21.099 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86266 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:21.358 00:16:21.358 real 0m2.292s 00:16:21.358 user 0m6.434s 00:16:21.358 sys 0m0.547s 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.358 11:34:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.358 ************************************ 00:16:21.358 END TEST nvmf_aer 00:16:21.358 ************************************ 00:16:21.358 11:34:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:21.358 11:34:58 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:21.358 11:34:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:21.358 11:34:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.358 11:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.358 ************************************ 00:16:21.358 START TEST nvmf_async_init 00:16:21.358 ************************************ 00:16:21.358 11:34:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:21.618 * Looking for test storage... 
00:16:21.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7fc76ae6276d4cacb872716c343a8837 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.618 11:34:58 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:21.618 Cannot find device "nvmf_tgt_br" 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.618 Cannot find device "nvmf_tgt_br2" 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:21.618 Cannot find device "nvmf_tgt_br" 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:16:21.618 Cannot find device "nvmf_tgt_br2" 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:21.618 11:34:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:21.618 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.618 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:21.619 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.619 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:21.619 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.619 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.619 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.619 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:21.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:16:21.877 00:16:21.877 --- 10.0.0.2 ping statistics --- 00:16:21.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.877 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:21.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:21.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:21.877 00:16:21.877 --- 10.0.0.3 ping statistics --- 00:16:21.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.877 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:21.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:21.877 00:16:21.877 --- 10.0.0.1 ping statistics --- 00:16:21.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.877 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86490 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86490 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86490 ']' 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.877 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
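The nvmf_veth_init block above is the network fixture every one of these TCP host tests reuses: the target runs inside the nvmf_tgt_ns_spdk namespace, veth pairs connect it to the initiator side, and the bridge-facing ends are enslaved to nvmf_br. Stripped of the cleanup and error paths, the setup is roughly the following (interface names and 10.0.0.x/24 addresses copied from the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-to-target reachability check before modprobe nvme-tcp

The second veth pair (nvmf_tgt_if2 / nvmf_tgt_br2, 10.0.0.3/24) is created the same way and gives the tests a second target address to listen on.
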
00:16:21.878 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.878 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.878 11:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:21.878 [2024-07-15 11:34:59.338990] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:21.878 [2024-07-15 11:34:59.339084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.163 [2024-07-15 11:34:59.474010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.163 [2024-07-15 11:34:59.531461] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.163 [2024-07-15 11:34:59.531526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.163 [2024-07-15 11:34:59.531538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.163 [2024-07-15 11:34:59.531559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.163 [2024-07-15 11:34:59.531568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.163 [2024-07-15 11:34:59.531603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 [2024-07-15 11:35:00.366169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 null0 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 
11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7fc76ae6276d4cacb872716c343a8837 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.106 [2024-07-15 11:35:00.426284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.106 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.365 nvme0n1 00:16:23.365 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.365 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:23.365 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.365 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.365 [ 00:16:23.365 { 00:16:23.365 "aliases": [ 00:16:23.365 "7fc76ae6-276d-4cac-b872-716c343a8837" 00:16:23.365 ], 00:16:23.365 "assigned_rate_limits": { 00:16:23.365 "r_mbytes_per_sec": 0, 00:16:23.365 "rw_ios_per_sec": 0, 00:16:23.365 "rw_mbytes_per_sec": 0, 00:16:23.365 "w_mbytes_per_sec": 0 00:16:23.365 }, 00:16:23.365 "block_size": 512, 00:16:23.365 "claimed": false, 00:16:23.365 "driver_specific": { 00:16:23.365 "mp_policy": "active_passive", 00:16:23.365 "nvme": [ 00:16:23.365 { 00:16:23.365 "ctrlr_data": { 00:16:23.365 "ana_reporting": false, 00:16:23.365 "cntlid": 1, 00:16:23.365 "firmware_revision": "24.09", 00:16:23.365 "model_number": "SPDK bdev Controller", 00:16:23.365 "multi_ctrlr": true, 00:16:23.365 "oacs": { 00:16:23.365 "firmware": 0, 00:16:23.365 "format": 0, 00:16:23.365 "ns_manage": 0, 00:16:23.365 "security": 0 00:16:23.365 }, 00:16:23.365 "serial_number": "00000000000000000000", 00:16:23.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.365 "vendor_id": "0x8086" 00:16:23.365 }, 00:16:23.365 "ns_data": { 00:16:23.365 "can_share": true, 00:16:23.365 "id": 1 00:16:23.365 }, 00:16:23.365 "trid": { 00:16:23.365 "adrfam": "IPv4", 
00:16:23.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.365 "traddr": "10.0.0.2", 00:16:23.365 "trsvcid": "4420", 00:16:23.365 "trtype": "TCP" 00:16:23.365 }, 00:16:23.365 "vs": { 00:16:23.365 "nvme_version": "1.3" 00:16:23.365 } 00:16:23.365 } 00:16:23.365 ] 00:16:23.365 }, 00:16:23.365 "memory_domains": [ 00:16:23.365 { 00:16:23.365 "dma_device_id": "system", 00:16:23.365 "dma_device_type": 1 00:16:23.365 } 00:16:23.365 ], 00:16:23.365 "name": "nvme0n1", 00:16:23.365 "num_blocks": 2097152, 00:16:23.365 "product_name": "NVMe disk", 00:16:23.365 "supported_io_types": { 00:16:23.365 "abort": true, 00:16:23.365 "compare": true, 00:16:23.365 "compare_and_write": true, 00:16:23.365 "copy": true, 00:16:23.365 "flush": true, 00:16:23.365 "get_zone_info": false, 00:16:23.365 "nvme_admin": true, 00:16:23.365 "nvme_io": true, 00:16:23.365 "nvme_io_md": false, 00:16:23.365 "nvme_iov_md": false, 00:16:23.365 "read": true, 00:16:23.365 "reset": true, 00:16:23.365 "seek_data": false, 00:16:23.366 "seek_hole": false, 00:16:23.366 "unmap": false, 00:16:23.366 "write": true, 00:16:23.366 "write_zeroes": true, 00:16:23.366 "zcopy": false, 00:16:23.366 "zone_append": false, 00:16:23.366 "zone_management": false 00:16:23.366 }, 00:16:23.366 "uuid": "7fc76ae6-276d-4cac-b872-716c343a8837", 00:16:23.366 "zoned": false 00:16:23.366 } 00:16:23.366 ] 00:16:23.366 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.366 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:23.366 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.366 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.366 [2024-07-15 11:35:00.707041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:23.366 [2024-07-15 11:35:00.707356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2205a30 (9): Bad file descriptor 00:16:23.629 [2024-07-15 11:35:00.849758] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 [ 00:16:23.629 { 00:16:23.629 "aliases": [ 00:16:23.629 "7fc76ae6-276d-4cac-b872-716c343a8837" 00:16:23.629 ], 00:16:23.629 "assigned_rate_limits": { 00:16:23.629 "r_mbytes_per_sec": 0, 00:16:23.629 "rw_ios_per_sec": 0, 00:16:23.629 "rw_mbytes_per_sec": 0, 00:16:23.629 "w_mbytes_per_sec": 0 00:16:23.629 }, 00:16:23.629 "block_size": 512, 00:16:23.629 "claimed": false, 00:16:23.629 "driver_specific": { 00:16:23.629 "mp_policy": "active_passive", 00:16:23.629 "nvme": [ 00:16:23.629 { 00:16:23.629 "ctrlr_data": { 00:16:23.629 "ana_reporting": false, 00:16:23.629 "cntlid": 2, 00:16:23.629 "firmware_revision": "24.09", 00:16:23.629 "model_number": "SPDK bdev Controller", 00:16:23.629 "multi_ctrlr": true, 00:16:23.629 "oacs": { 00:16:23.629 "firmware": 0, 00:16:23.629 "format": 0, 00:16:23.629 "ns_manage": 0, 00:16:23.629 "security": 0 00:16:23.629 }, 00:16:23.629 "serial_number": "00000000000000000000", 00:16:23.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.629 "vendor_id": "0x8086" 00:16:23.629 }, 00:16:23.629 "ns_data": { 00:16:23.629 "can_share": true, 00:16:23.629 "id": 1 00:16:23.629 }, 00:16:23.629 "trid": { 00:16:23.629 "adrfam": "IPv4", 00:16:23.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.629 "traddr": "10.0.0.2", 00:16:23.629 "trsvcid": "4420", 00:16:23.629 "trtype": "TCP" 00:16:23.629 }, 00:16:23.629 "vs": { 00:16:23.629 "nvme_version": "1.3" 00:16:23.629 } 00:16:23.629 } 00:16:23.629 ] 00:16:23.629 }, 00:16:23.629 "memory_domains": [ 00:16:23.629 { 00:16:23.629 "dma_device_id": "system", 00:16:23.629 "dma_device_type": 1 00:16:23.629 } 00:16:23.629 ], 00:16:23.629 "name": "nvme0n1", 00:16:23.629 "num_blocks": 2097152, 00:16:23.629 "product_name": "NVMe disk", 00:16:23.629 "supported_io_types": { 00:16:23.629 "abort": true, 00:16:23.629 "compare": true, 00:16:23.629 "compare_and_write": true, 00:16:23.629 "copy": true, 00:16:23.629 "flush": true, 00:16:23.629 "get_zone_info": false, 00:16:23.629 "nvme_admin": true, 00:16:23.629 "nvme_io": true, 00:16:23.629 "nvme_io_md": false, 00:16:23.629 "nvme_iov_md": false, 00:16:23.629 "read": true, 00:16:23.629 "reset": true, 00:16:23.629 "seek_data": false, 00:16:23.629 "seek_hole": false, 00:16:23.629 "unmap": false, 00:16:23.629 "write": true, 00:16:23.629 "write_zeroes": true, 00:16:23.629 "zcopy": false, 00:16:23.629 "zone_append": false, 00:16:23.629 "zone_management": false 00:16:23.629 }, 00:16:23.629 "uuid": "7fc76ae6-276d-4cac-b872-716c343a8837", 00:16:23.629 "zoned": false 00:16:23.629 } 00:16:23.629 ] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:23.629 11:35:00 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JGPfw0SfrR 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JGPfw0SfrR 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 [2024-07-15 11:35:00.927270] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:23.629 [2024-07-15 11:35:00.927458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JGPfw0SfrR 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 [2024-07-15 11:35:00.935302] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JGPfw0SfrR 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 [2024-07-15 11:35:00.943287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:23.629 [2024-07-15 11:35:00.943383] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:23.629 nvme0n1 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 [ 00:16:23.629 { 00:16:23.629 "aliases": [ 00:16:23.629 "7fc76ae6-276d-4cac-b872-716c343a8837" 00:16:23.629 ], 00:16:23.629 "assigned_rate_limits": { 00:16:23.629 "r_mbytes_per_sec": 0, 00:16:23.629 
"rw_ios_per_sec": 0, 00:16:23.629 "rw_mbytes_per_sec": 0, 00:16:23.629 "w_mbytes_per_sec": 0 00:16:23.629 }, 00:16:23.629 "block_size": 512, 00:16:23.629 "claimed": false, 00:16:23.629 "driver_specific": { 00:16:23.629 "mp_policy": "active_passive", 00:16:23.629 "nvme": [ 00:16:23.629 { 00:16:23.629 "ctrlr_data": { 00:16:23.629 "ana_reporting": false, 00:16:23.629 "cntlid": 3, 00:16:23.629 "firmware_revision": "24.09", 00:16:23.629 "model_number": "SPDK bdev Controller", 00:16:23.629 "multi_ctrlr": true, 00:16:23.629 "oacs": { 00:16:23.629 "firmware": 0, 00:16:23.629 "format": 0, 00:16:23.629 "ns_manage": 0, 00:16:23.629 "security": 0 00:16:23.629 }, 00:16:23.629 "serial_number": "00000000000000000000", 00:16:23.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.629 "vendor_id": "0x8086" 00:16:23.629 }, 00:16:23.629 "ns_data": { 00:16:23.629 "can_share": true, 00:16:23.629 "id": 1 00:16:23.629 }, 00:16:23.629 "trid": { 00:16:23.629 "adrfam": "IPv4", 00:16:23.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.629 "traddr": "10.0.0.2", 00:16:23.629 "trsvcid": "4421", 00:16:23.629 "trtype": "TCP" 00:16:23.629 }, 00:16:23.629 "vs": { 00:16:23.629 "nvme_version": "1.3" 00:16:23.629 } 00:16:23.629 } 00:16:23.629 ] 00:16:23.629 }, 00:16:23.629 "memory_domains": [ 00:16:23.629 { 00:16:23.629 "dma_device_id": "system", 00:16:23.629 "dma_device_type": 1 00:16:23.629 } 00:16:23.629 ], 00:16:23.629 "name": "nvme0n1", 00:16:23.629 "num_blocks": 2097152, 00:16:23.629 "product_name": "NVMe disk", 00:16:23.629 "supported_io_types": { 00:16:23.629 "abort": true, 00:16:23.629 "compare": true, 00:16:23.629 "compare_and_write": true, 00:16:23.629 "copy": true, 00:16:23.629 "flush": true, 00:16:23.629 "get_zone_info": false, 00:16:23.629 "nvme_admin": true, 00:16:23.629 "nvme_io": true, 00:16:23.629 "nvme_io_md": false, 00:16:23.629 "nvme_iov_md": false, 00:16:23.629 "read": true, 00:16:23.629 "reset": true, 00:16:23.629 "seek_data": false, 00:16:23.629 "seek_hole": false, 00:16:23.629 "unmap": false, 00:16:23.629 "write": true, 00:16:23.629 "write_zeroes": true, 00:16:23.629 "zcopy": false, 00:16:23.629 "zone_append": false, 00:16:23.629 "zone_management": false 00:16:23.629 }, 00:16:23.629 "uuid": "7fc76ae6-276d-4cac-b872-716c343a8837", 00:16:23.629 "zoned": false 00:16:23.629 } 00:16:23.629 ] 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.JGPfw0SfrR 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:16:23.629 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.629 rmmod nvme_tcp 00:16:23.889 rmmod nvme_fabrics 00:16:23.889 rmmod nvme_keyring 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86490 ']' 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86490 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86490 ']' 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86490 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86490 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86490' 00:16:23.889 killing process with pid 86490 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86490 00:16:23.889 [2024-07-15 11:35:01.187532] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:23.889 [2024-07-15 11:35:01.187588] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86490 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.889 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.148 11:35:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:24.148 ************************************ 00:16:24.148 END TEST nvmf_async_init 00:16:24.148 ************************************ 00:16:24.148 00:16:24.148 real 0m2.588s 00:16:24.148 user 0m2.434s 00:16:24.148 sys 0m0.553s 00:16:24.148 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.148 11:35:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.148 11:35:01 nvmf_tcp -- 
nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.148 ************************************ 00:16:24.148 START TEST dma 00:16:24.148 ************************************ 00:16:24.148 11:35:01 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:24.148 * Looking for test storage... 00:16:24.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:24.148 11:35:01 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.148 11:35:01 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.148 11:35:01 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.148 11:35:01 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.148 11:35:01 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.148 11:35:01 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.148 11:35:01 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.148 11:35:01 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:16:24.148 11:35:01 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.148 11:35:01 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.148 11:35:01 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:24.148 11:35:01 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:16:24.148 00:16:24.148 real 0m0.094s 00:16:24.148 user 0m0.045s 00:16:24.148 sys 0m0.055s 00:16:24.148 11:35:01 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.148 ************************************ 00:16:24.148 END TEST dma 00:16:24.148 ************************************ 00:16:24.148 11:35:01 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:24.148 11:35:01 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.148 11:35:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:24.148 
************************************ 00:16:24.148 START TEST nvmf_identify 00:16:24.148 ************************************ 00:16:24.148 11:35:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:24.407 * Looking for test storage... 00:16:24.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:24.407 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:24.408 Cannot find device "nvmf_tgt_br" 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.408 Cannot find device "nvmf_tgt_br2" 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:24.408 Cannot find device "nvmf_tgt_br" 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:24.408 Cannot find device "nvmf_tgt_br2" 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.408 11:35:01 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:24.408 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:24.667 11:35:01 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:24.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:24.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:16:24.667 00:16:24.667 --- 10.0.0.2 ping statistics --- 00:16:24.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.667 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:24.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:24.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:24.667 00:16:24.667 --- 10.0.0.3 ping statistics --- 00:16:24.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.667 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:24.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:24.667 00:16:24.667 --- 10.0.0.1 ping statistics --- 00:16:24.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.667 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86760 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86760 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86760 ']' 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
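The nvmf_veth_init block traced above builds the virtual test network the rest of the run depends on: one initiator veth pair left in the root namespace, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, everything bridged through nvmf_br, an iptables accept rule for NVMe/TCP port 4420, and a single-packet ping in each direction as a sanity check. A minimal standalone sketch of the same topology (an assumption-labelled reconstruction, not the harness script itself; it assumes root privileges plus the iproute2/iptables tools seen in the trace, and reuses the interface and address names from the log):

#!/usr/bin/env bash
# Sketch of the nvmf_veth_init topology, assuming iproute2/iptables and root.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing as in the log: initiator 10.0.0.1, target 10.0.0.2 / 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the root-namespace ends together and allow NVMe/TCP traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, same as the log: one ping in each direction.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The "Cannot find device"/"Cannot open network namespace" messages earlier in the trace are expected on a clean host: the harness tears down any leftover interfaces and namespace before recreating them, so those failures are ignored.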
00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.667 11:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:24.667 [2024-07-15 11:35:02.132641] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:24.667 [2024-07-15 11:35:02.132749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.925 [2024-07-15 11:35:02.272300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.925 [2024-07-15 11:35:02.341694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.925 [2024-07-15 11:35:02.341751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.925 [2024-07-15 11:35:02.341765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.925 [2024-07-15 11:35:02.341784] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.925 [2024-07-15 11:35:02.341793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.925 [2024-07-15 11:35:02.341877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.925 [2024-07-15 11:35:02.342014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.925 [2024-07-15 11:35:02.342487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.925 [2024-07-15 11:35:02.342525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:25.859 [2024-07-15 11:35:03.239493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:25.859 Malloc0 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
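From here identify.sh drives the freshly started nvmf_tgt through rpc_cmd, the autotest wrapper around SPDK's JSON-RPC client. A sketch of the equivalent calls issued directly with scripts/rpc.py is below; it assumes the default /var/tmp/spdk.sock socket and mirrors the object names from this trace, including the namespace and listener calls that follow a few lines further down in the log.

# Sketch of the target configuration driven by identify.sh, using scripts/rpc.py
# directly instead of the harness' rpc_cmd wrapper. Names and flags mirror the log.
rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Target launched inside the test namespace, as in the trace (runs in background;
# wait for /var/tmp/spdk.sock to appear before issuing RPCs).
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems

The final nvmf_get_subsystems call returns the JSON array seen later in the trace: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as namespace 1, both listening on 10.0.0.2:4420.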
00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:25.859 [2024-07-15 11:35:03.328526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.859 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:26.116 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.116 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:26.116 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.117 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:26.117 [ 00:16:26.117 { 00:16:26.117 "allow_any_host": true, 00:16:26.117 "hosts": [], 00:16:26.117 "listen_addresses": [ 00:16:26.117 { 00:16:26.117 "adrfam": "IPv4", 00:16:26.117 "traddr": "10.0.0.2", 00:16:26.117 "trsvcid": "4420", 00:16:26.117 "trtype": "TCP" 00:16:26.117 } 00:16:26.117 ], 00:16:26.117 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.117 "subtype": "Discovery" 00:16:26.117 }, 00:16:26.117 { 00:16:26.117 "allow_any_host": true, 00:16:26.117 "hosts": [], 00:16:26.117 "listen_addresses": [ 00:16:26.117 { 00:16:26.117 "adrfam": "IPv4", 00:16:26.117 "traddr": "10.0.0.2", 00:16:26.117 "trsvcid": "4420", 00:16:26.117 "trtype": "TCP" 00:16:26.117 } 00:16:26.117 ], 00:16:26.117 "max_cntlid": 65519, 00:16:26.117 "max_namespaces": 32, 00:16:26.117 "min_cntlid": 1, 00:16:26.117 "model_number": "SPDK bdev Controller", 00:16:26.117 "namespaces": [ 00:16:26.117 { 00:16:26.117 "bdev_name": "Malloc0", 00:16:26.117 "eui64": "ABCDEF0123456789", 00:16:26.117 "name": "Malloc0", 00:16:26.117 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:26.117 "nsid": 1, 00:16:26.117 "uuid": "edac342d-f104-47df-9fda-129ab03c934f" 00:16:26.117 } 00:16:26.117 ], 00:16:26.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.117 "serial_number": "SPDK00000000000001", 00:16:26.117 "subtype": "NVMe" 00:16:26.117 } 00:16:26.117 ] 00:16:26.117 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.117 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:16:26.117 [2024-07-15 11:35:03.380738] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:26.117 [2024-07-15 11:35:03.380809] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86813 ] 00:16:26.117 [2024-07-15 11:35:03.528124] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:26.117 [2024-07-15 11:35:03.528219] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:26.117 [2024-07-15 11:35:03.528227] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:26.117 [2024-07-15 11:35:03.528242] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:26.117 [2024-07-15 11:35:03.528251] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:26.117 [2024-07-15 11:35:03.528434] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:26.117 [2024-07-15 11:35:03.528492] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9aca60 0 00:16:26.117 [2024-07-15 11:35:03.532596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:26.117 [2024-07-15 11:35:03.532642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:26.117 [2024-07-15 11:35:03.532649] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:26.117 [2024-07-15 11:35:03.532653] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:26.117 [2024-07-15 11:35:03.532710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.532719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.532724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.532743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:26.117 [2024-07-15 11:35:03.532788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.540600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.540645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 11:35:03.540651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.540658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.540675] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:26.117 [2024-07-15 11:35:03.540691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:26.117 [2024-07-15 11:35:03.540700] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:26.117 [2024-07-15 11:35:03.540733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
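The spdk_nvme_identify invocation above targets the discovery controller, and because it was run with -L all the trace that follows records the whole fabric bring-up in detail: ICReq/ICResp exchange, FABRIC CONNECT, property reads of VS/CAP/CC/CSTS, CC.EN being set, the IDENTIFY controller command, and keep-alive configuration. A hypothetical follow-up invocation (not part of this excerpt) would point the same tool at the NVM subsystem created earlier and drop the debug logging:

# Hypothetical follow-up: identify the NVM subsystem itself rather than the
# discovery controller, without the "-L all" debug trace shown below.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'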
00:16:26.117 [2024-07-15 11:35:03.540740] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.540745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.540763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.117 [2024-07-15 11:35:03.540820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.540957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.540964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 11:35:03.540968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.540973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.540980] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:26.117 [2024-07-15 11:35:03.540988] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:26.117 [2024-07-15 11:35:03.540997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.541014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.117 [2024-07-15 11:35:03.541034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.541091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.541097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 11:35:03.541101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.541113] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:26.117 [2024-07-15 11:35:03.541123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:26.117 [2024-07-15 11:35:03.541131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.541147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.117 [2024-07-15 11:35:03.541166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.541221] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.541228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 11:35:03.541232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.541243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:26.117 [2024-07-15 11:35:03.541253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541258] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.541270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.117 [2024-07-15 11:35:03.541289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.541344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.541351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 11:35:03.541355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.541365] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:26.117 [2024-07-15 11:35:03.541370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:26.117 [2024-07-15 11:35:03.541378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:26.117 [2024-07-15 11:35:03.541485] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:26.117 [2024-07-15 11:35:03.541500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:26.117 [2024-07-15 11:35:03.541512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.541528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.117 [2024-07-15 11:35:03.541562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.541628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.541635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 
11:35:03.541639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.541649] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:26.117 [2024-07-15 11:35:03.541660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.117 [2024-07-15 11:35:03.541677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.117 [2024-07-15 11:35:03.541697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.117 [2024-07-15 11:35:03.541753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.117 [2024-07-15 11:35:03.541760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.117 [2024-07-15 11:35:03.541763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.117 [2024-07-15 11:35:03.541768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.117 [2024-07-15 11:35:03.541773] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:26.118 [2024-07-15 11:35:03.541778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:26.118 [2024-07-15 11:35:03.541787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:26.118 [2024-07-15 11:35:03.541812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:26.118 [2024-07-15 11:35:03.541829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.541834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.541842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.118 [2024-07-15 11:35:03.541863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.118 [2024-07-15 11:35:03.541972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.118 [2024-07-15 11:35:03.541980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.118 [2024-07-15 11:35:03.541984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.541988] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9aca60): datao=0, datal=4096, cccid=0 00:16:26.118 [2024-07-15 11:35:03.541994] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef840) on tqpair(0x9aca60): expected_datao=0, payload_size=4096 00:16:26.118 [2024-07-15 
11:35:03.541999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542009] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542014] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.118 [2024-07-15 11:35:03.542030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.118 [2024-07-15 11:35:03.542033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.118 [2024-07-15 11:35:03.542049] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:26.118 [2024-07-15 11:35:03.542055] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:26.118 [2024-07-15 11:35:03.542060] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:26.118 [2024-07-15 11:35:03.542066] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:26.118 [2024-07-15 11:35:03.542071] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:26.118 [2024-07-15 11:35:03.542077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:26.118 [2024-07-15 11:35:03.542087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:26.118 [2024-07-15 11:35:03.542095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.118 [2024-07-15 11:35:03.542134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.118 [2024-07-15 11:35:03.542198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.118 [2024-07-15 11:35:03.542210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.118 [2024-07-15 11:35:03.542215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.118 [2024-07-15 11:35:03.542228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:16:26.118 [2024-07-15 11:35:03.542251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.118 [2024-07-15 11:35:03.542272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.118 [2024-07-15 11:35:03.542293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.118 [2024-07-15 11:35:03.542313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:26.118 [2024-07-15 11:35:03.542327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:26.118 [2024-07-15 11:35:03.542336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.118 [2024-07-15 11:35:03.542370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef840, cid 0, qid 0 00:16:26.118 [2024-07-15 11:35:03.542378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef9c0, cid 1, qid 0 00:16:26.118 [2024-07-15 11:35:03.542383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efb40, cid 2, qid 0 00:16:26.118 [2024-07-15 11:35:03.542388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.118 [2024-07-15 11:35:03.542393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efe40, cid 4, qid 0 00:16:26.118 [2024-07-15 11:35:03.542491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.118 [2024-07-15 11:35:03.542498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.118 [2024-07-15 11:35:03.542502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efe40) on tqpair=0x9aca60 00:16:26.118 [2024-07-15 
11:35:03.542512] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:26.118 [2024-07-15 11:35:03.542521] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:26.118 [2024-07-15 11:35:03.542534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.118 [2024-07-15 11:35:03.542582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efe40, cid 4, qid 0 00:16:26.118 [2024-07-15 11:35:03.542656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.118 [2024-07-15 11:35:03.542663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.118 [2024-07-15 11:35:03.542666] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542671] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9aca60): datao=0, datal=4096, cccid=4 00:16:26.118 [2024-07-15 11:35:03.542676] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9efe40) on tqpair(0x9aca60): expected_datao=0, payload_size=4096 00:16:26.118 [2024-07-15 11:35:03.542681] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542688] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542693] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.118 [2024-07-15 11:35:03.542708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.118 [2024-07-15 11:35:03.542712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efe40) on tqpair=0x9aca60 00:16:26.118 [2024-07-15 11:35:03.542732] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:26.118 [2024-07-15 11:35:03.542782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.118 [2024-07-15 11:35:03.542805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.542814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9aca60) 00:16:26.118 [2024-07-15 11:35:03.542821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.118 [2024-07-15 11:35:03.542849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x9efe40, cid 4, qid 0 00:16:26.118 [2024-07-15 11:35:03.542857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9effc0, cid 5, qid 0 00:16:26.118 [2024-07-15 11:35:03.543020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.118 [2024-07-15 11:35:03.543040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.118 [2024-07-15 11:35:03.543045] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.543050] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9aca60): datao=0, datal=1024, cccid=4 00:16:26.118 [2024-07-15 11:35:03.543055] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9efe40) on tqpair(0x9aca60): expected_datao=0, payload_size=1024 00:16:26.118 [2024-07-15 11:35:03.543060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.543068] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.543072] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.543078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.118 [2024-07-15 11:35:03.543085] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.118 [2024-07-15 11:35:03.543088] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.118 [2024-07-15 11:35:03.543093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9effc0) on tqpair=0x9aca60 00:16:26.118 [2024-07-15 11:35:03.583700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.118 [2024-07-15 11:35:03.583761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.119 [2024-07-15 11:35:03.583768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.583775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efe40) on tqpair=0x9aca60 00:16:26.119 [2024-07-15 11:35:03.583813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.583819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9aca60) 00:16:26.119 [2024-07-15 11:35:03.583839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.119 [2024-07-15 11:35:03.583889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efe40, cid 4, qid 0 00:16:26.119 [2024-07-15 11:35:03.584034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.119 [2024-07-15 11:35:03.584041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.119 [2024-07-15 11:35:03.584045] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584049] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9aca60): datao=0, datal=3072, cccid=4 00:16:26.119 [2024-07-15 11:35:03.584055] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9efe40) on tqpair(0x9aca60): expected_datao=0, payload_size=3072 00:16:26.119 [2024-07-15 11:35:03.584061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584073] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584078] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.119 [2024-07-15 11:35:03.584095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.119 [2024-07-15 11:35:03.584099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efe40) on tqpair=0x9aca60 00:16:26.119 [2024-07-15 11:35:03.584116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9aca60) 00:16:26.119 [2024-07-15 11:35:03.584130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.119 [2024-07-15 11:35:03.584158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efe40, cid 4, qid 0 00:16:26.119 [2024-07-15 11:35:03.584238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.119 [2024-07-15 11:35:03.584245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.119 [2024-07-15 11:35:03.584249] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584253] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9aca60): datao=0, datal=8, cccid=4 00:16:26.119 [2024-07-15 11:35:03.584258] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9efe40) on tqpair(0x9aca60): expected_datao=0, payload_size=8 00:16:26.119 [2024-07-15 11:35:03.584263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584270] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.119 [2024-07-15 11:35:03.584274] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.381 [2024-07-15 11:35:03.628635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.381 [2024-07-15 11:35:03.628686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.381 [2024-07-15 11:35:03.628693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.381 [2024-07-15 11:35:03.628701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efe40) on tqpair=0x9aca60 00:16:26.381 ===================================================== 00:16:26.381 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:26.381 ===================================================== 00:16:26.381 Controller Capabilities/Features 00:16:26.381 ================================ 00:16:26.381 Vendor ID: 0000 00:16:26.381 Subsystem Vendor ID: 0000 00:16:26.381 Serial Number: .................... 00:16:26.381 Model Number: ........................................ 
00:16:26.381 Firmware Version: 24.09 00:16:26.381 Recommended Arb Burst: 0 00:16:26.381 IEEE OUI Identifier: 00 00 00 00:16:26.381 Multi-path I/O 00:16:26.381 May have multiple subsystem ports: No 00:16:26.381 May have multiple controllers: No 00:16:26.381 Associated with SR-IOV VF: No 00:16:26.381 Max Data Transfer Size: 131072 00:16:26.381 Max Number of Namespaces: 0 00:16:26.381 Max Number of I/O Queues: 1024 00:16:26.381 NVMe Specification Version (VS): 1.3 00:16:26.381 NVMe Specification Version (Identify): 1.3 00:16:26.381 Maximum Queue Entries: 128 00:16:26.381 Contiguous Queues Required: Yes 00:16:26.381 Arbitration Mechanisms Supported 00:16:26.381 Weighted Round Robin: Not Supported 00:16:26.381 Vendor Specific: Not Supported 00:16:26.381 Reset Timeout: 15000 ms 00:16:26.381 Doorbell Stride: 4 bytes 00:16:26.381 NVM Subsystem Reset: Not Supported 00:16:26.381 Command Sets Supported 00:16:26.381 NVM Command Set: Supported 00:16:26.381 Boot Partition: Not Supported 00:16:26.381 Memory Page Size Minimum: 4096 bytes 00:16:26.381 Memory Page Size Maximum: 4096 bytes 00:16:26.381 Persistent Memory Region: Not Supported 00:16:26.381 Optional Asynchronous Events Supported 00:16:26.381 Namespace Attribute Notices: Not Supported 00:16:26.381 Firmware Activation Notices: Not Supported 00:16:26.381 ANA Change Notices: Not Supported 00:16:26.381 PLE Aggregate Log Change Notices: Not Supported 00:16:26.381 LBA Status Info Alert Notices: Not Supported 00:16:26.381 EGE Aggregate Log Change Notices: Not Supported 00:16:26.381 Normal NVM Subsystem Shutdown event: Not Supported 00:16:26.381 Zone Descriptor Change Notices: Not Supported 00:16:26.381 Discovery Log Change Notices: Supported 00:16:26.381 Controller Attributes 00:16:26.381 128-bit Host Identifier: Not Supported 00:16:26.381 Non-Operational Permissive Mode: Not Supported 00:16:26.381 NVM Sets: Not Supported 00:16:26.381 Read Recovery Levels: Not Supported 00:16:26.381 Endurance Groups: Not Supported 00:16:26.381 Predictable Latency Mode: Not Supported 00:16:26.381 Traffic Based Keep ALive: Not Supported 00:16:26.381 Namespace Granularity: Not Supported 00:16:26.381 SQ Associations: Not Supported 00:16:26.381 UUID List: Not Supported 00:16:26.381 Multi-Domain Subsystem: Not Supported 00:16:26.381 Fixed Capacity Management: Not Supported 00:16:26.381 Variable Capacity Management: Not Supported 00:16:26.381 Delete Endurance Group: Not Supported 00:16:26.381 Delete NVM Set: Not Supported 00:16:26.381 Extended LBA Formats Supported: Not Supported 00:16:26.381 Flexible Data Placement Supported: Not Supported 00:16:26.381 00:16:26.381 Controller Memory Buffer Support 00:16:26.381 ================================ 00:16:26.381 Supported: No 00:16:26.381 00:16:26.381 Persistent Memory Region Support 00:16:26.381 ================================ 00:16:26.381 Supported: No 00:16:26.381 00:16:26.381 Admin Command Set Attributes 00:16:26.381 ============================ 00:16:26.381 Security Send/Receive: Not Supported 00:16:26.381 Format NVM: Not Supported 00:16:26.381 Firmware Activate/Download: Not Supported 00:16:26.381 Namespace Management: Not Supported 00:16:26.381 Device Self-Test: Not Supported 00:16:26.381 Directives: Not Supported 00:16:26.381 NVMe-MI: Not Supported 00:16:26.381 Virtualization Management: Not Supported 00:16:26.381 Doorbell Buffer Config: Not Supported 00:16:26.381 Get LBA Status Capability: Not Supported 00:16:26.381 Command & Feature Lockdown Capability: Not Supported 00:16:26.381 Abort Command Limit: 1 00:16:26.381 Async 
Event Request Limit: 4 00:16:26.381 Number of Firmware Slots: N/A 00:16:26.381 Firmware Slot 1 Read-Only: N/A 00:16:26.381 Firmware Activation Without Reset: N/A 00:16:26.381 Multiple Update Detection Support: N/A 00:16:26.381 Firmware Update Granularity: No Information Provided 00:16:26.381 Per-Namespace SMART Log: No 00:16:26.381 Asymmetric Namespace Access Log Page: Not Supported 00:16:26.381 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:26.381 Command Effects Log Page: Not Supported 00:16:26.381 Get Log Page Extended Data: Supported 00:16:26.381 Telemetry Log Pages: Not Supported 00:16:26.381 Persistent Event Log Pages: Not Supported 00:16:26.381 Supported Log Pages Log Page: May Support 00:16:26.381 Commands Supported & Effects Log Page: Not Supported 00:16:26.381 Feature Identifiers & Effects Log Page:May Support 00:16:26.381 NVMe-MI Commands & Effects Log Page: May Support 00:16:26.381 Data Area 4 for Telemetry Log: Not Supported 00:16:26.381 Error Log Page Entries Supported: 128 00:16:26.381 Keep Alive: Not Supported 00:16:26.381 00:16:26.381 NVM Command Set Attributes 00:16:26.381 ========================== 00:16:26.381 Submission Queue Entry Size 00:16:26.381 Max: 1 00:16:26.381 Min: 1 00:16:26.381 Completion Queue Entry Size 00:16:26.381 Max: 1 00:16:26.381 Min: 1 00:16:26.381 Number of Namespaces: 0 00:16:26.381 Compare Command: Not Supported 00:16:26.381 Write Uncorrectable Command: Not Supported 00:16:26.381 Dataset Management Command: Not Supported 00:16:26.381 Write Zeroes Command: Not Supported 00:16:26.381 Set Features Save Field: Not Supported 00:16:26.381 Reservations: Not Supported 00:16:26.381 Timestamp: Not Supported 00:16:26.381 Copy: Not Supported 00:16:26.381 Volatile Write Cache: Not Present 00:16:26.381 Atomic Write Unit (Normal): 1 00:16:26.381 Atomic Write Unit (PFail): 1 00:16:26.381 Atomic Compare & Write Unit: 1 00:16:26.382 Fused Compare & Write: Supported 00:16:26.382 Scatter-Gather List 00:16:26.382 SGL Command Set: Supported 00:16:26.382 SGL Keyed: Supported 00:16:26.382 SGL Bit Bucket Descriptor: Not Supported 00:16:26.382 SGL Metadata Pointer: Not Supported 00:16:26.382 Oversized SGL: Not Supported 00:16:26.382 SGL Metadata Address: Not Supported 00:16:26.382 SGL Offset: Supported 00:16:26.382 Transport SGL Data Block: Not Supported 00:16:26.382 Replay Protected Memory Block: Not Supported 00:16:26.382 00:16:26.382 Firmware Slot Information 00:16:26.382 ========================= 00:16:26.382 Active slot: 0 00:16:26.382 00:16:26.382 00:16:26.382 Error Log 00:16:26.382 ========= 00:16:26.382 00:16:26.382 Active Namespaces 00:16:26.382 ================= 00:16:26.382 Discovery Log Page 00:16:26.382 ================== 00:16:26.382 Generation Counter: 2 00:16:26.382 Number of Records: 2 00:16:26.382 Record Format: 0 00:16:26.382 00:16:26.382 Discovery Log Entry 0 00:16:26.382 ---------------------- 00:16:26.382 Transport Type: 3 (TCP) 00:16:26.382 Address Family: 1 (IPv4) 00:16:26.382 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:26.382 Entry Flags: 00:16:26.382 Duplicate Returned Information: 1 00:16:26.382 Explicit Persistent Connection Support for Discovery: 1 00:16:26.382 Transport Requirements: 00:16:26.382 Secure Channel: Not Required 00:16:26.382 Port ID: 0 (0x0000) 00:16:26.382 Controller ID: 65535 (0xffff) 00:16:26.382 Admin Max SQ Size: 128 00:16:26.382 Transport Service Identifier: 4420 00:16:26.382 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:26.382 Transport Address: 10.0.0.2 00:16:26.382 
Discovery Log Entry 1 00:16:26.382 ---------------------- 00:16:26.382 Transport Type: 3 (TCP) 00:16:26.382 Address Family: 1 (IPv4) 00:16:26.382 Subsystem Type: 2 (NVM Subsystem) 00:16:26.382 Entry Flags: 00:16:26.382 Duplicate Returned Information: 0 00:16:26.382 Explicit Persistent Connection Support for Discovery: 0 00:16:26.382 Transport Requirements: 00:16:26.382 Secure Channel: Not Required 00:16:26.382 Port ID: 0 (0x0000) 00:16:26.382 Controller ID: 65535 (0xffff) 00:16:26.382 Admin Max SQ Size: 128 00:16:26.382 Transport Service Identifier: 4420 00:16:26.382 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:26.382 Transport Address: 10.0.0.2 [2024-07-15 11:35:03.628873] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:26.382 [2024-07-15 11:35:03.628894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef840) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.628905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.382 [2024-07-15 11:35:03.628912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef9c0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.628917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.382 [2024-07-15 11:35:03.628923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efb40) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.628928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.382 [2024-07-15 11:35:03.628934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.628939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.382 [2024-07-15 11:35:03.628956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.628962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.628966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.628981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.629184] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629322] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:26.382 [2024-07-15 11:35:03.629328] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:26.382 [2024-07-15 11:35:03.629339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.629355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.629477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629576] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629605] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.629613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.629735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629834] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.382 [2024-07-15 11:35:03.629870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.382 [2024-07-15 11:35:03.629889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.382 [2024-07-15 11:35:03.629956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.382 [2024-07-15 11:35:03.629963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.382 [2024-07-15 11:35:03.629967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.382 [2024-07-15 11:35:03.629982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.382 [2024-07-15 11:35:03.629987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.629991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.629998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630203] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630363] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630420] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630819] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630868] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.630925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.630936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.630940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.630956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.630965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.630973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.630992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.631046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.631053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.631056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.631071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.631088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.631105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.631161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.631168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.631172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 
[2024-07-15 11:35:03.631187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.631203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.631221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.631274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.631285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.631289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.631305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.631321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.631340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.631392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.631400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.383 [2024-07-15 11:35:03.631404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.383 [2024-07-15 11:35:03.631419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.383 [2024-07-15 11:35:03.631428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.383 [2024-07-15 11:35:03.631436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.383 [2024-07-15 11:35:03.631454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.383 [2024-07-15 11:35:03.631508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.383 [2024-07-15 11:35:03.631515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.631519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.631534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 
11:35:03.631542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.631561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.631581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.631641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.631649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.631653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.631668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.631690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.631713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.631769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.631776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.631779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.631794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.631811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.631829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.631884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.631891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.631895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.631910] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.631919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.631926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.631944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.631997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.632004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.632007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.632022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.632039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.632056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.632112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.632119] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.632123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.632138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632146] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.632154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.632172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.632227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.632239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.632244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.632259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.632276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.632295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 
11:35:03.632349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.632356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.632360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.632375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.632391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.632409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.632465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.632472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.632476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.632491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.632500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.632507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.632525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.636575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.636601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 11:35:03.636607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.636613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.636633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.636639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.636643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9aca60) 00:16:26.384 [2024-07-15 11:35:03.636654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.384 [2024-07-15 11:35:03.636684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efcc0, cid 3, qid 0 00:16:26.384 [2024-07-15 11:35:03.636754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.384 [2024-07-15 11:35:03.636761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.384 [2024-07-15 
11:35:03.636765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.636769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efcc0) on tqpair=0x9aca60 00:16:26.384 [2024-07-15 11:35:03.636778] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:16:26.384 00:16:26.384 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:26.384 [2024-07-15 11:35:03.675973] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:26.384 [2024-07-15 11:35:03.676043] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86822 ] 00:16:26.384 [2024-07-15 11:35:03.819917] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:26.384 [2024-07-15 11:35:03.820002] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:26.384 [2024-07-15 11:35:03.820010] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:26.384 [2024-07-15 11:35:03.820024] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:26.384 [2024-07-15 11:35:03.820032] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:26.384 [2024-07-15 11:35:03.820183] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:26.384 [2024-07-15 11:35:03.820236] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e68a60 0 00:16:26.384 [2024-07-15 11:35:03.832572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:26.384 [2024-07-15 11:35:03.832605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:26.384 [2024-07-15 11:35:03.832611] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:26.384 [2024-07-15 11:35:03.832616] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:26.384 [2024-07-15 11:35:03.832665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.384 [2024-07-15 11:35:03.832673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.832677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.832694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:26.385 [2024-07-15 11:35:03.832729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.840578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.840625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.840632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.840654] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:26.385 [2024-07-15 11:35:03.840664] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:26.385 [2024-07-15 11:35:03.840672] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:26.385 [2024-07-15 11:35:03.840694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.840717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.840751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.840842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.840849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.840853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.840863] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:26.385 [2024-07-15 11:35:03.840872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:26.385 [2024-07-15 11:35:03.840880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.840897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.840917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.840977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.840984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.840988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.840992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.840999] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:26.385 [2024-07-15 11:35:03.841008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:26.385 [2024-07-15 11:35:03.841016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841021] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.841033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.841052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.841107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.841114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.841118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.841128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:26.385 [2024-07-15 11:35:03.841139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841144] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.841156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.841175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.841235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.841242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.841245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.841256] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:26.385 [2024-07-15 11:35:03.841261] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:26.385 [2024-07-15 11:35:03.841270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:26.385 [2024-07-15 11:35:03.841376] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:26.385 [2024-07-15 11:35:03.841390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:26.385 [2024-07-15 11:35:03.841401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.841418] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.841440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.841498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.841505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.841509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.841519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:26.385 [2024-07-15 11:35:03.841529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841538] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.841559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.841582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.841642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.385 [2024-07-15 11:35:03.841649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.385 [2024-07-15 11:35:03.841653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.385 [2024-07-15 11:35:03.841663] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:26.385 [2024-07-15 11:35:03.841668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:26.385 [2024-07-15 11:35:03.841677] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:26.385 [2024-07-15 11:35:03.841688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:26.385 [2024-07-15 11:35:03.841701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.385 [2024-07-15 11:35:03.841705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.385 [2024-07-15 11:35:03.841714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.385 [2024-07-15 11:35:03.841734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.385 [2024-07-15 11:35:03.841848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.385 [2024-07-15 11:35:03.841857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.385 [2024-07-15 
11:35:03.841861] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841866] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=4096, cccid=0 00:16:26.386 [2024-07-15 11:35:03.841871] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eab840) on tqpair(0x1e68a60): expected_datao=0, payload_size=4096 00:16:26.386 [2024-07-15 11:35:03.841877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841887] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.386 [2024-07-15 11:35:03.841907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.386 [2024-07-15 11:35:03.841911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.386 [2024-07-15 11:35:03.841926] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:26.386 [2024-07-15 11:35:03.841932] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:26.386 [2024-07-15 11:35:03.841937] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:26.386 [2024-07-15 11:35:03.841942] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:26.386 [2024-07-15 11:35:03.841947] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:26.386 [2024-07-15 11:35:03.841952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.841962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.841970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.841979] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.841987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.386 [2024-07-15 11:35:03.842009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.386 [2024-07-15 11:35:03.842076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.386 [2024-07-15 11:35:03.842088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.386 [2024-07-15 11:35:03.842093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.386 [2024-07-15 11:35:03.842107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 
11:35:03.842112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842116] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.386 [2024-07-15 11:35:03.842130] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.386 [2024-07-15 11:35:03.842152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842156] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.386 [2024-07-15 11:35:03.842173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.386 [2024-07-15 11:35:03.842193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.386 [2024-07-15 11:35:03.842251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab840, cid 0, qid 0 00:16:26.386 [2024-07-15 11:35:03.842259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eab9c0, cid 1, qid 0 00:16:26.386 [2024-07-15 11:35:03.842264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabb40, cid 2, qid 0 00:16:26.386 [2024-07-15 11:35:03.842269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.386 [2024-07-15 11:35:03.842274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.386 [2024-07-15 11:35:03.842373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
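The trace above corresponds to spdk_nvme_identify attaching to nqn.2016-06.io.spdk:cnode1 over TCP at 10.0.0.2:4420: the transport ID string from the -r argument is parsed, a client socket is opened, FABRIC CONNECT is sent, VS and CAP are read, CC.EN is set, CSTS.RDY is polled, and IDENTIFY CONTROLLER plus AER configuration follow. The C sketch below shows roughly what that attach flow looks like through SPDK's public host API; it is illustrative only (not the code this test runs), and the application name and printed field are arbitrary choices.

/* Minimal sketch, assuming the same target as the test above. Error handling trimmed. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";        /* illustrative application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same connection parameters as the -r argument above:
	 * TCP, IPv4, 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous attach: internally runs the FABRIC CONNECT / read VS /
	 * read CAP / CC.EN=1 / wait CSTS.RDY=1 / IDENTIFY sequence traced above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);  /* IDENTIFY CONTROLLER result */
	printf("CNTLID: 0x%04x\n", cdata->cntlid);

	spdk_nvme_detach(ctrlr);                  /* triggers the shutdown (CC.SHN) path */
	return 0;
}

spdk_nvme_connect() drives the whole admin-queue bring-up synchronously, which is why the log shows the controller state machine ("setting state to ...") advancing without any explicit host intervention between steps.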
00:16:26.386 [2024-07-15 11:35:03.842380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.386 [2024-07-15 11:35:03.842384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on tqpair=0x1e68a60 00:16:26.386 [2024-07-15 11:35:03.842395] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:26.386 [2024-07-15 11:35:03.842405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842447] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:26.386 [2024-07-15 11:35:03.842467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.386 [2024-07-15 11:35:03.842524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.386 [2024-07-15 11:35:03.842531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.386 [2024-07-15 11:35:03.842535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on tqpair=0x1e68a60 00:16:26.386 [2024-07-15 11:35:03.842621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842635] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.386 [2024-07-15 11:35:03.842679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.386 [2024-07-15 11:35:03.842752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.386 [2024-07-15 11:35:03.842759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.386 [2024-07-15 11:35:03.842763] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842767] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=4096, cccid=4 00:16:26.386 [2024-07-15 11:35:03.842772] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eabe40) on tqpair(0x1e68a60): expected_datao=0, payload_size=4096 00:16:26.386 [2024-07-15 11:35:03.842777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842785] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842790] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.386 [2024-07-15 11:35:03.842805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.386 [2024-07-15 11:35:03.842809] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on tqpair=0x1e68a60 00:16:26.386 [2024-07-15 11:35:03.842830] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:26.386 [2024-07-15 11:35:03.842841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:26.386 [2024-07-15 11:35:03.842861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.842866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.386 [2024-07-15 11:35:03.842874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.386 [2024-07-15 11:35:03.842895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.386 [2024-07-15 11:35:03.842979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.386 [2024-07-15 11:35:03.842992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.386 [2024-07-15 11:35:03.842996] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.843001] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=4096, cccid=4 00:16:26.386 [2024-07-15 11:35:03.843006] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eabe40) on tqpair(0x1e68a60): expected_datao=0, payload_size=4096 00:16:26.386 [2024-07-15 11:35:03.843011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.843019] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.843023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.843032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.386 [2024-07-15 11:35:03.843039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.386 [2024-07-15 11:35:03.843043] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.386 [2024-07-15 11:35:03.843047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on 
tqpair=0x1e68a60 00:16:26.386 [2024-07-15 11:35:03.843063] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.387 [2024-07-15 11:35:03.843185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.387 [2024-07-15 11:35:03.843192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.387 [2024-07-15 11:35:03.843196] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843200] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=4096, cccid=4 00:16:26.387 [2024-07-15 11:35:03.843205] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eabe40) on tqpair(0x1e68a60): expected_datao=0, payload_size=4096 00:16:26.387 [2024-07-15 11:35:03.843210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843218] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843222] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.387 [2024-07-15 11:35:03.843237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.387 [2024-07-15 11:35:03.843241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on tqpair=0x1e68a60 00:16:26.387 [2024-07-15 11:35:03.843255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843300] 
nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:26.387 [2024-07-15 11:35:03.843305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:26.387 [2024-07-15 11:35:03.843311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:26.387 [2024-07-15 11:35:03.843332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:26.387 [2024-07-15 11:35:03.843394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.387 [2024-07-15 11:35:03.843402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabfc0, cid 5, qid 0 00:16:26.387 [2024-07-15 11:35:03.843476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.387 [2024-07-15 11:35:03.843483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.387 [2024-07-15 11:35:03.843487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on tqpair=0x1e68a60 00:16:26.387 [2024-07-15 11:35:03.843499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.387 [2024-07-15 11:35:03.843505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.387 [2024-07-15 11:35:03.843509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabfc0) on tqpair=0x1e68a60 00:16:26.387 [2024-07-15 11:35:03.843524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabfc0, cid 5, qid 0 00:16:26.387 [2024-07-15 11:35:03.843634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.387 [2024-07-15 11:35:03.843642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.387 [2024-07-15 11:35:03.843646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843650] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabfc0) on tqpair=0x1e68a60 00:16:26.387 [2024-07-15 11:35:03.843661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabfc0, cid 5, qid 0 00:16:26.387 [2024-07-15 11:35:03.843750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.387 [2024-07-15 11:35:03.843771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.387 [2024-07-15 11:35:03.843777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabfc0) on tqpair=0x1e68a60 00:16:26.387 [2024-07-15 11:35:03.843793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabfc0, cid 5, qid 0 00:16:26.387 [2024-07-15 11:35:03.843880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.387 [2024-07-15 11:35:03.843887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.387 [2024-07-15 11:35:03.843891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabfc0) on tqpair=0x1e68a60 00:16:26.387 [2024-07-15 11:35:03.843916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.843958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 
[2024-07-15 11:35:03.843981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.843985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e68a60) 00:16:26.387 [2024-07-15 11:35:03.843992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.387 [2024-07-15 11:35:03.844014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabfc0, cid 5, qid 0 00:16:26.387 [2024-07-15 11:35:03.844021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabe40, cid 4, qid 0 00:16:26.387 [2024-07-15 11:35:03.844026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eac140, cid 6, qid 0 00:16:26.387 [2024-07-15 11:35:03.844031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eac2c0, cid 7, qid 0 00:16:26.387 [2024-07-15 11:35:03.844174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.387 [2024-07-15 11:35:03.844189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.387 [2024-07-15 11:35:03.844195] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844199] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=8192, cccid=5 00:16:26.387 [2024-07-15 11:35:03.844204] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eabfc0) on tqpair(0x1e68a60): expected_datao=0, payload_size=8192 00:16:26.387 [2024-07-15 11:35:03.844209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844234] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.387 [2024-07-15 11:35:03.844246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.387 [2024-07-15 11:35:03.844250] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844254] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=512, cccid=4 00:16:26.387 [2024-07-15 11:35:03.844259] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eabe40) on tqpair(0x1e68a60): expected_datao=0, payload_size=512 00:16:26.387 [2024-07-15 11:35:03.844264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844271] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844275] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.387 [2024-07-15 11:35:03.844287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.387 [2024-07-15 11:35:03.844291] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=512, cccid=6 00:16:26.387 [2024-07-15 11:35:03.844300] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eac140) on tqpair(0x1e68a60): expected_datao=0, 
payload_size=512 00:16:26.387 [2024-07-15 11:35:03.844305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844311] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.387 [2024-07-15 11:35:03.844315] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:26.388 [2024-07-15 11:35:03.844328] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:26.388 [2024-07-15 11:35:03.844332] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844335] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e68a60): datao=0, datal=4096, cccid=7 00:16:26.388 [2024-07-15 11:35:03.844340] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eac2c0) on tqpair(0x1e68a60): expected_datao=0, payload_size=4096 00:16:26.388 [2024-07-15 11:35:03.844346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844353] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844357] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.388 [2024-07-15 11:35:03.844370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.388 [2024-07-15 11:35:03.844374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabfc0) on tqpair=0x1e68a60 00:16:26.388 [2024-07-15 11:35:03.844397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.388 [2024-07-15 11:35:03.844404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.388 [2024-07-15 11:35:03.844408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabe40) on tqpair=0x1e68a60 00:16:26.388 [2024-07-15 11:35:03.844425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.388 [2024-07-15 11:35:03.844432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.388 [2024-07-15 11:35:03.844435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eac140) on tqpair=0x1e68a60 00:16:26.388 [2024-07-15 11:35:03.844448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.388 [2024-07-15 11:35:03.844454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.388 [2024-07-15 11:35:03.844458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.388 [2024-07-15 11:35:03.844462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eac2c0) on tqpair=0x1e68a60 00:16:26.388 ===================================================== 00:16:26.388 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.388 ===================================================== 00:16:26.388 Controller Capabilities/Features 00:16:26.388 ================================ 00:16:26.388 Vendor ID: 8086 00:16:26.388 Subsystem Vendor ID: 8086 00:16:26.388 Serial Number: 
SPDK00000000000001 00:16:26.388 Model Number: SPDK bdev Controller 00:16:26.388 Firmware Version: 24.09 00:16:26.388 Recommended Arb Burst: 6 00:16:26.388 IEEE OUI Identifier: e4 d2 5c 00:16:26.388 Multi-path I/O 00:16:26.388 May have multiple subsystem ports: Yes 00:16:26.388 May have multiple controllers: Yes 00:16:26.388 Associated with SR-IOV VF: No 00:16:26.388 Max Data Transfer Size: 131072 00:16:26.388 Max Number of Namespaces: 32 00:16:26.388 Max Number of I/O Queues: 127 00:16:26.388 NVMe Specification Version (VS): 1.3 00:16:26.388 NVMe Specification Version (Identify): 1.3 00:16:26.388 Maximum Queue Entries: 128 00:16:26.388 Contiguous Queues Required: Yes 00:16:26.388 Arbitration Mechanisms Supported 00:16:26.388 Weighted Round Robin: Not Supported 00:16:26.388 Vendor Specific: Not Supported 00:16:26.388 Reset Timeout: 15000 ms 00:16:26.388 Doorbell Stride: 4 bytes 00:16:26.388 NVM Subsystem Reset: Not Supported 00:16:26.388 Command Sets Supported 00:16:26.388 NVM Command Set: Supported 00:16:26.388 Boot Partition: Not Supported 00:16:26.388 Memory Page Size Minimum: 4096 bytes 00:16:26.388 Memory Page Size Maximum: 4096 bytes 00:16:26.388 Persistent Memory Region: Not Supported 00:16:26.388 Optional Asynchronous Events Supported 00:16:26.388 Namespace Attribute Notices: Supported 00:16:26.388 Firmware Activation Notices: Not Supported 00:16:26.388 ANA Change Notices: Not Supported 00:16:26.388 PLE Aggregate Log Change Notices: Not Supported 00:16:26.388 LBA Status Info Alert Notices: Not Supported 00:16:26.388 EGE Aggregate Log Change Notices: Not Supported 00:16:26.388 Normal NVM Subsystem Shutdown event: Not Supported 00:16:26.388 Zone Descriptor Change Notices: Not Supported 00:16:26.388 Discovery Log Change Notices: Not Supported 00:16:26.388 Controller Attributes 00:16:26.388 128-bit Host Identifier: Supported 00:16:26.388 Non-Operational Permissive Mode: Not Supported 00:16:26.388 NVM Sets: Not Supported 00:16:26.388 Read Recovery Levels: Not Supported 00:16:26.388 Endurance Groups: Not Supported 00:16:26.388 Predictable Latency Mode: Not Supported 00:16:26.388 Traffic Based Keep ALive: Not Supported 00:16:26.388 Namespace Granularity: Not Supported 00:16:26.388 SQ Associations: Not Supported 00:16:26.388 UUID List: Not Supported 00:16:26.388 Multi-Domain Subsystem: Not Supported 00:16:26.388 Fixed Capacity Management: Not Supported 00:16:26.388 Variable Capacity Management: Not Supported 00:16:26.388 Delete Endurance Group: Not Supported 00:16:26.388 Delete NVM Set: Not Supported 00:16:26.388 Extended LBA Formats Supported: Not Supported 00:16:26.388 Flexible Data Placement Supported: Not Supported 00:16:26.388 00:16:26.388 Controller Memory Buffer Support 00:16:26.388 ================================ 00:16:26.388 Supported: No 00:16:26.388 00:16:26.388 Persistent Memory Region Support 00:16:26.388 ================================ 00:16:26.388 Supported: No 00:16:26.388 00:16:26.388 Admin Command Set Attributes 00:16:26.388 ============================ 00:16:26.388 Security Send/Receive: Not Supported 00:16:26.388 Format NVM: Not Supported 00:16:26.388 Firmware Activate/Download: Not Supported 00:16:26.388 Namespace Management: Not Supported 00:16:26.388 Device Self-Test: Not Supported 00:16:26.388 Directives: Not Supported 00:16:26.388 NVMe-MI: Not Supported 00:16:26.388 Virtualization Management: Not Supported 00:16:26.388 Doorbell Buffer Config: Not Supported 00:16:26.388 Get LBA Status Capability: Not Supported 00:16:26.388 Command & Feature Lockdown Capability: Not 
Supported 00:16:26.388 Abort Command Limit: 4 00:16:26.388 Async Event Request Limit: 4 00:16:26.388 Number of Firmware Slots: N/A 00:16:26.388 Firmware Slot 1 Read-Only: N/A 00:16:26.388 Firmware Activation Without Reset: N/A 00:16:26.388 Multiple Update Detection Support: N/A 00:16:26.388 Firmware Update Granularity: No Information Provided 00:16:26.388 Per-Namespace SMART Log: No 00:16:26.388 Asymmetric Namespace Access Log Page: Not Supported 00:16:26.388 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:26.388 Command Effects Log Page: Supported 00:16:26.388 Get Log Page Extended Data: Supported 00:16:26.388 Telemetry Log Pages: Not Supported 00:16:26.388 Persistent Event Log Pages: Not Supported 00:16:26.388 Supported Log Pages Log Page: May Support 00:16:26.388 Commands Supported & Effects Log Page: Not Supported 00:16:26.388 Feature Identifiers & Effects Log Page:May Support 00:16:26.388 NVMe-MI Commands & Effects Log Page: May Support 00:16:26.388 Data Area 4 for Telemetry Log: Not Supported 00:16:26.388 Error Log Page Entries Supported: 128 00:16:26.388 Keep Alive: Supported 00:16:26.388 Keep Alive Granularity: 10000 ms 00:16:26.388 00:16:26.388 NVM Command Set Attributes 00:16:26.388 ========================== 00:16:26.388 Submission Queue Entry Size 00:16:26.388 Max: 64 00:16:26.388 Min: 64 00:16:26.388 Completion Queue Entry Size 00:16:26.388 Max: 16 00:16:26.388 Min: 16 00:16:26.388 Number of Namespaces: 32 00:16:26.388 Compare Command: Supported 00:16:26.388 Write Uncorrectable Command: Not Supported 00:16:26.388 Dataset Management Command: Supported 00:16:26.388 Write Zeroes Command: Supported 00:16:26.388 Set Features Save Field: Not Supported 00:16:26.388 Reservations: Supported 00:16:26.388 Timestamp: Not Supported 00:16:26.388 Copy: Supported 00:16:26.388 Volatile Write Cache: Present 00:16:26.388 Atomic Write Unit (Normal): 1 00:16:26.388 Atomic Write Unit (PFail): 1 00:16:26.388 Atomic Compare & Write Unit: 1 00:16:26.388 Fused Compare & Write: Supported 00:16:26.388 Scatter-Gather List 00:16:26.388 SGL Command Set: Supported 00:16:26.388 SGL Keyed: Supported 00:16:26.388 SGL Bit Bucket Descriptor: Not Supported 00:16:26.388 SGL Metadata Pointer: Not Supported 00:16:26.388 Oversized SGL: Not Supported 00:16:26.388 SGL Metadata Address: Not Supported 00:16:26.388 SGL Offset: Supported 00:16:26.388 Transport SGL Data Block: Not Supported 00:16:26.388 Replay Protected Memory Block: Not Supported 00:16:26.388 00:16:26.388 Firmware Slot Information 00:16:26.388 ========================= 00:16:26.388 Active slot: 1 00:16:26.388 Slot 1 Firmware Revision: 24.09 00:16:26.388 00:16:26.388 00:16:26.388 Commands Supported and Effects 00:16:26.388 ============================== 00:16:26.388 Admin Commands 00:16:26.388 -------------- 00:16:26.388 Get Log Page (02h): Supported 00:16:26.388 Identify (06h): Supported 00:16:26.388 Abort (08h): Supported 00:16:26.388 Set Features (09h): Supported 00:16:26.388 Get Features (0Ah): Supported 00:16:26.388 Asynchronous Event Request (0Ch): Supported 00:16:26.388 Keep Alive (18h): Supported 00:16:26.388 I/O Commands 00:16:26.388 ------------ 00:16:26.388 Flush (00h): Supported LBA-Change 00:16:26.388 Write (01h): Supported LBA-Change 00:16:26.388 Read (02h): Supported 00:16:26.388 Compare (05h): Supported 00:16:26.388 Write Zeroes (08h): Supported LBA-Change 00:16:26.389 Dataset Management (09h): Supported LBA-Change 00:16:26.389 Copy (19h): Supported LBA-Change 00:16:26.389 00:16:26.389 Error Log 00:16:26.389 ========= 00:16:26.389 
00:16:26.389 Arbitration 00:16:26.389 =========== 00:16:26.389 Arbitration Burst: 1 00:16:26.389 00:16:26.389 Power Management 00:16:26.389 ================ 00:16:26.389 Number of Power States: 1 00:16:26.389 Current Power State: Power State #0 00:16:26.389 Power State #0: 00:16:26.389 Max Power: 0.00 W 00:16:26.389 Non-Operational State: Operational 00:16:26.389 Entry Latency: Not Reported 00:16:26.389 Exit Latency: Not Reported 00:16:26.389 Relative Read Throughput: 0 00:16:26.389 Relative Read Latency: 0 00:16:26.389 Relative Write Throughput: 0 00:16:26.389 Relative Write Latency: 0 00:16:26.389 Idle Power: Not Reported 00:16:26.389 Active Power: Not Reported 00:16:26.389 Non-Operational Permissive Mode: Not Supported 00:16:26.389 00:16:26.389 Health Information 00:16:26.389 ================== 00:16:26.389 Critical Warnings: 00:16:26.389 Available Spare Space: OK 00:16:26.389 Temperature: OK 00:16:26.389 Device Reliability: OK 00:16:26.389 Read Only: No 00:16:26.389 Volatile Memory Backup: OK 00:16:26.389 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:26.389 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:26.389 Available Spare: 0% 00:16:26.389 Available Spare Threshold: 0% 00:16:26.389 Life Percentage Used:[2024-07-15 11:35:03.848604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.848619] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.848631] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.848665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eac2c0, cid 7, qid 0 00:16:26.389 [2024-07-15 11:35:03.848751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.848759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.848763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.848768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eac2c0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.848812] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:26.389 [2024-07-15 11:35:03.848825] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab840) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.848833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.389 [2024-07-15 11:35:03.848839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eab9c0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.389 [2024-07-15 11:35:03.848850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabb40) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.848855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.389 [2024-07-15 11:35:03.848861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.848866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:26.389 [2024-07-15 11:35:03.848877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.848882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.848886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.848894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.848919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.848973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.848981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.848985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.848989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.848998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.849132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849137] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849143] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:26.389 [2024-07-15 11:35:03.849148] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:26.389 [2024-07-15 11:35:03.849159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 
11:35:03.849263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.849380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849384] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.849495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.849622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on 
tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.849743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.389 [2024-07-15 11:35:03.849775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.389 [2024-07-15 11:35:03.849793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.389 [2024-07-15 11:35:03.849861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.389 [2024-07-15 11:35:03.849868] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.389 [2024-07-15 11:35:03.849872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.389 [2024-07-15 11:35:03.849887] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.389 [2024-07-15 11:35:03.849896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.849904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.849924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.849979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.849990] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.849995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850016] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850133] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850164] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 
[2024-07-15 11:35:03.850399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850722] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850783] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850847] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.850898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.850950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.850956] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.850960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.850975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.850984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.850992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.851011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.851068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 [2024-07-15 11:35:03.851075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.851078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.851083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.390 [2024-07-15 11:35:03.851094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.851099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.851103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.390 [2024-07-15 11:35:03.851110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.390 [2024-07-15 11:35:03.851129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.390 [2024-07-15 11:35:03.851185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.390 
[2024-07-15 11:35:03.851192] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.390 [2024-07-15 11:35:03.851196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.390 [2024-07-15 11:35:03.851201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.391 [2024-07-15 11:35:03.851211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.391 [2024-07-15 11:35:03.851228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.391 [2024-07-15 11:35:03.851247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.391 [2024-07-15 11:35:03.851298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.391 [2024-07-15 11:35:03.851305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.391 [2024-07-15 11:35:03.851309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.391 [2024-07-15 11:35:03.851324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.391 [2024-07-15 11:35:03.851340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.391 [2024-07-15 11:35:03.851359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.391 [2024-07-15 11:35:03.851413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.391 [2024-07-15 11:35:03.851420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.391 [2024-07-15 11:35:03.851424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.391 [2024-07-15 11:35:03.851439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.391 [2024-07-15 11:35:03.851455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.391 [2024-07-15 11:35:03.851474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.391 [2024-07-15 11:35:03.851527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.391 [2024-07-15 11:35:03.851538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.391 [2024-07-15 11:35:03.851543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:16:26.391 [2024-07-15 11:35:03.851557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.391 [2024-07-15 11:35:03.851570] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.391 [2024-07-15 11:35:03.851587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.391 [2024-07-15 11:35:03.851608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.391 [2024-07-15 11:35:03.851833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.391 [2024-07-15 11:35:03.851848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.391 [2024-07-15 11:35:03.851854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.391 [2024-07-15 11:35:03.851870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.851879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.391 [2024-07-15 11:35:03.851887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.391 [2024-07-15 11:35:03.851908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.391 [2024-07-15 11:35:03.851986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.391 [2024-07-15 11:35:03.851994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.391 [2024-07-15 11:35:03.851998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.852002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.391 [2024-07-15 11:35:03.852013] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.852018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.391 [2024-07-15 11:35:03.852022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.391 [2024-07-15 11:35:03.852030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.649 [2024-07-15 11:35:03.852049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.649 [2024-07-15 11:35:03.852105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.649 [2024-07-15 11:35:03.852112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.649 [2024-07-15 11:35:03.852116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.649 [2024-07-15 11:35:03.852131] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.649 [2024-07-15 11:35:03.852148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.649 [2024-07-15 11:35:03.852167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.649 [2024-07-15 11:35:03.852218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.649 [2024-07-15 11:35:03.852225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.649 [2024-07-15 11:35:03.852229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.649 [2024-07-15 11:35:03.852244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.649 [2024-07-15 11:35:03.852261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.649 [2024-07-15 11:35:03.852279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.649 [2024-07-15 11:35:03.852344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.649 [2024-07-15 11:35:03.852351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.649 [2024-07-15 11:35:03.852355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.649 [2024-07-15 11:35:03.852370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.649 [2024-07-15 11:35:03.852387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.649 [2024-07-15 11:35:03.852405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.649 [2024-07-15 11:35:03.852460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.649 [2024-07-15 11:35:03.852471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.649 [2024-07-15 11:35:03.852476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.649 [2024-07-15 11:35:03.852492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.852501] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.649 [2024-07-15 11:35:03.852509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.649 [2024-07-15 11:35:03.852529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.649 [2024-07-15 11:35:03.856573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.649 [2024-07-15 11:35:03.856596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.649 [2024-07-15 11:35:03.856602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.856607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.649 [2024-07-15 11:35:03.856623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.856629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.856633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e68a60) 00:16:26.649 [2024-07-15 11:35:03.856643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.649 [2024-07-15 11:35:03.856671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eabcc0, cid 3, qid 0 00:16:26.649 [2024-07-15 11:35:03.856737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:26.649 [2024-07-15 11:35:03.856744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:26.649 [2024-07-15 11:35:03.856748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:26.649 [2024-07-15 11:35:03.856752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eabcc0) on tqpair=0x1e68a60 00:16:26.649 [2024-07-15 11:35:03.856761] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:16:26.649 0% 00:16:26.649 Data Units Read: 0 00:16:26.649 Data Units Written: 0 00:16:26.649 Host Read Commands: 0 00:16:26.649 Host Write Commands: 0 00:16:26.649 Controller Busy Time: 0 minutes 00:16:26.649 Power Cycles: 0 00:16:26.649 Power On Hours: 0 hours 00:16:26.649 Unsafe Shutdowns: 0 00:16:26.649 Unrecoverable Media Errors: 0 00:16:26.649 Lifetime Error Log Entries: 0 00:16:26.649 Warning Temperature Time: 0 minutes 00:16:26.649 Critical Temperature Time: 0 minutes 00:16:26.649 00:16:26.649 Number of Queues 00:16:26.649 ================ 00:16:26.649 Number of I/O Submission Queues: 127 00:16:26.649 Number of I/O Completion Queues: 127 00:16:26.649 00:16:26.649 Active Namespaces 00:16:26.649 ================= 00:16:26.649 Namespace ID:1 00:16:26.649 Error Recovery Timeout: Unlimited 00:16:26.649 Command Set Identifier: NVM (00h) 00:16:26.649 Deallocate: Supported 00:16:26.649 Deallocated/Unwritten Error: Not Supported 00:16:26.649 Deallocated Read Value: Unknown 00:16:26.650 Deallocate in Write Zeroes: Not Supported 00:16:26.650 Deallocated Guard Field: 0xFFFF 00:16:26.650 Flush: Supported 00:16:26.650 Reservation: Supported 00:16:26.650 Namespace Sharing Capabilities: Multiple Controllers 00:16:26.650 Size (in LBAs): 131072 (0GiB) 00:16:26.650 Capacity (in LBAs): 131072 (0GiB) 00:16:26.650 Utilization (in LBAs): 131072 (0GiB) 00:16:26.650 NGUID: ABCDEF0123456789ABCDEF0123456789 
00:16:26.650 EUI64: ABCDEF0123456789 00:16:26.650 UUID: edac342d-f104-47df-9fda-129ab03c934f 00:16:26.650 Thin Provisioning: Not Supported 00:16:26.650 Per-NS Atomic Units: Yes 00:16:26.650 Atomic Boundary Size (Normal): 0 00:16:26.650 Atomic Boundary Size (PFail): 0 00:16:26.650 Atomic Boundary Offset: 0 00:16:26.650 Maximum Single Source Range Length: 65535 00:16:26.650 Maximum Copy Length: 65535 00:16:26.650 Maximum Source Range Count: 1 00:16:26.650 NGUID/EUI64 Never Reused: No 00:16:26.650 Namespace Write Protected: No 00:16:26.650 Number of LBA Formats: 1 00:16:26.650 Current LBA Format: LBA Format #00 00:16:26.650 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:26.650 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.650 rmmod nvme_tcp 00:16:26.650 rmmod nvme_fabrics 00:16:26.650 rmmod nvme_keyring 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86760 ']' 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86760 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86760 ']' 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86760 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.650 11:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86760 00:16:26.650 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:26.650 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:26.650 killing process with pid 86760 00:16:26.650 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86760' 00:16:26.650 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86760 00:16:26.650 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86760 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:26.909 00:16:26.909 real 0m2.671s 00:16:26.909 user 0m7.715s 00:16:26.909 sys 0m0.588s 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.909 11:35:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:26.909 ************************************ 00:16:26.909 END TEST nvmf_identify 00:16:26.909 ************************************ 00:16:26.909 11:35:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:26.909 11:35:04 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:26.909 11:35:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.909 11:35:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.909 11:35:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.909 ************************************ 00:16:26.909 START TEST nvmf_perf 00:16:26.909 ************************************ 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:26.909 * Looking for test storage... 
00:16:26.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.909 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:27.167 Cannot find device "nvmf_tgt_br" 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.167 Cannot find device "nvmf_tgt_br2" 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:27.167 Cannot find device "nvmf_tgt_br" 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:27.167 Cannot find device "nvmf_tgt_br2" 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.167 
11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.167 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:27.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:27.424 00:16:27.424 --- 10.0.0.2 ping statistics --- 00:16:27.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.424 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:27.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:27.424 00:16:27.424 --- 10.0.0.3 ping statistics --- 00:16:27.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.424 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:27.424 00:16:27.424 --- 10.0.0.1 ping statistics --- 00:16:27.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.424 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86985 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86985 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86985 ']' 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.424 11:35:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:27.424 [2024-07-15 11:35:04.808938] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:27.424 [2024-07-15 11:35:04.809042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.681 [2024-07-15 11:35:04.941887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.681 [2024-07-15 11:35:05.008234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.681 [2024-07-15 11:35:05.008291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:27.681 [2024-07-15 11:35:05.008303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.681 [2024-07-15 11:35:05.008311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.681 [2024-07-15 11:35:05.008318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.681 [2024-07-15 11:35:05.008474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.681 [2024-07-15 11:35:05.008717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.681 [2024-07-15 11:35:05.009113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.681 [2024-07-15 11:35:05.009125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:28.614 11:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:28.872 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:28.872 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:29.437 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:29.437 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:29.694 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:29.694 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:29.694 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:29.694 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:29.694 11:35:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:29.952 [2024-07-15 11:35:07.353493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.952 11:35:07 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:30.514 11:35:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:30.514 11:35:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.514 11:35:07 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:30.514 11:35:07 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:31.080 11:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.338 [2024-07-15 11:35:08.603982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.338 11:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.596 11:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:31.596 11:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:31.596 11:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:31.596 11:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:32.530 Initializing NVMe Controllers 00:16:32.530 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:32.530 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:32.530 Initialization complete. Launching workers. 00:16:32.530 ======================================================== 00:16:32.530 Latency(us) 00:16:32.530 Device Information : IOPS MiB/s Average min max 00:16:32.530 PCIE (0000:00:10.0) NSID 1 from core 0: 24960.90 97.50 1281.51 284.19 6662.48 00:16:32.530 ======================================================== 00:16:32.530 Total : 24960.90 97.50 1281.51 284.19 6662.48 00:16:32.530 00:16:32.530 11:35:09 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:33.902 Initializing NVMe Controllers 00:16:33.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:33.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:33.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:33.902 Initialization complete. Launching workers. 00:16:33.902 ======================================================== 00:16:33.902 Latency(us) 00:16:33.902 Device Information : IOPS MiB/s Average min max 00:16:33.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3380.37 13.20 294.31 117.92 4275.97 00:16:33.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.35 7939.61 12020.59 00:16:33.902 ======================================================== 00:16:33.902 Total : 3503.87 13.69 571.57 117.92 12020.59 00:16:33.902 00:16:34.159 11:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:35.534 Initializing NVMe Controllers 00:16:35.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:35.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:35.534 Initialization complete. Launching workers. 
00:16:35.534 ======================================================== 00:16:35.534 Latency(us) 00:16:35.534 Device Information : IOPS MiB/s Average min max 00:16:35.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8256.93 32.25 3879.34 732.80 10999.73 00:16:35.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2707.01 10.57 11914.91 4837.58 20229.69 00:16:35.534 ======================================================== 00:16:35.534 Total : 10963.94 42.83 5863.34 732.80 20229.69 00:16:35.534 00:16:35.534 11:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:35.534 11:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:38.061 Initializing NVMe Controllers 00:16:38.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:38.061 Controller IO queue size 128, less than required. 00:16:38.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:38.061 Controller IO queue size 128, less than required. 00:16:38.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:38.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:38.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:38.061 Initialization complete. Launching workers. 00:16:38.061 ======================================================== 00:16:38.061 Latency(us) 00:16:38.061 Device Information : IOPS MiB/s Average min max 00:16:38.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1680.63 420.16 76874.36 43456.38 124029.71 00:16:38.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 446.50 111.63 336568.77 95823.34 670661.61 00:16:38.061 ======================================================== 00:16:38.061 Total : 2127.13 531.78 131386.41 43456.38 670661.61 00:16:38.061 00:16:38.061 11:35:15 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:38.320 Initializing NVMe Controllers 00:16:38.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:38.320 Controller IO queue size 128, less than required. 00:16:38.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:38.320 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:38.320 Controller IO queue size 128, less than required. 00:16:38.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:38.320 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:38.320 WARNING: Some requested NVMe devices were skipped 00:16:38.320 No valid NVMe controllers or AIO or URING devices found 00:16:38.320 11:35:15 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:40.850 Initializing NVMe Controllers 00:16:40.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.850 Controller IO queue size 128, less than required. 00:16:40.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:40.850 Controller IO queue size 128, less than required. 00:16:40.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:40.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:40.850 Initialization complete. Launching workers. 00:16:40.850 00:16:40.850 ==================== 00:16:40.850 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:40.850 TCP transport: 00:16:40.850 polls: 10092 00:16:40.850 idle_polls: 4672 00:16:40.850 sock_completions: 5420 00:16:40.850 nvme_completions: 3217 00:16:40.850 submitted_requests: 4750 00:16:40.850 queued_requests: 1 00:16:40.850 00:16:40.850 ==================== 00:16:40.850 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:40.850 TCP transport: 00:16:40.850 polls: 12079 00:16:40.850 idle_polls: 7958 00:16:40.850 sock_completions: 4121 00:16:40.850 nvme_completions: 7551 00:16:40.850 submitted_requests: 11270 00:16:40.850 queued_requests: 1 00:16:40.850 ======================================================== 00:16:40.850 Latency(us) 00:16:40.850 Device Information : IOPS MiB/s Average min max 00:16:40.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 803.91 200.98 163647.30 95829.22 351060.13 00:16:40.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1887.30 471.82 68131.80 34293.59 116260.57 00:16:40.850 ======================================================== 00:16:40.850 Total : 2691.21 672.80 96664.02 34293.59 351060.13 00:16:40.850 00:16:40.850 11:35:18 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:41.108 11:35:18 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.108 11:35:18 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:41.108 11:35:18 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:41.108 11:35:18 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:41.108 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.108 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.366 rmmod nvme_tcp 00:16:41.366 rmmod nvme_fabrics 00:16:41.366 rmmod nvme_keyring 00:16:41.366 11:35:18 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86985 ']' 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86985 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86985 ']' 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86985 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86985 00:16:41.366 killing process with pid 86985 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86985' 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86985 00:16:41.366 11:35:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86985 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:41.931 00:16:41.931 real 0m15.074s 00:16:41.931 user 0m55.686s 00:16:41.931 sys 0m3.572s 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.931 11:35:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:41.931 ************************************ 00:16:41.931 END TEST nvmf_perf 00:16:41.931 ************************************ 00:16:41.931 11:35:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.931 11:35:19 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:41.931 11:35:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.931 11:35:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.931 11:35:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.931 ************************************ 00:16:41.931 START TEST nvmf_fio_host 00:16:41.931 ************************************ 00:16:41.931 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:42.190 * Looking for test storage... 
00:16:42.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.190 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:42.191 Cannot find device "nvmf_tgt_br" 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.191 Cannot find device "nvmf_tgt_br2" 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:42.191 Cannot find device "nvmf_tgt_br" 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:42.191 Cannot find device "nvmf_tgt_br2" 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:42.191 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:42.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:16:42.449 00:16:42.449 --- 10.0.0.2 ping statistics --- 00:16:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.449 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:42.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:42.449 00:16:42.449 --- 10.0.0.3 ping statistics --- 00:16:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.449 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:42.449 00:16:42.449 --- 10.0.0.1 ping statistics --- 00:16:42.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.449 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87462 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87462 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87462 ']' 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.449 11:35:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.729 [2024-07-15 11:35:19.930877] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:42.729 [2024-07-15 11:35:19.930995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.729 [2024-07-15 11:35:20.065908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.729 [2024-07-15 11:35:20.125283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:42.729 [2024-07-15 11:35:20.125338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.729 [2024-07-15 11:35:20.125371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.729 [2024-07-15 11:35:20.125384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.729 [2024-07-15 11:35:20.125394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.729 [2024-07-15 11:35:20.125590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.729 [2024-07-15 11:35:20.125711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.729 [2024-07-15 11:35:20.126112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.729 [2024-07-15 11:35:20.126124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.682 11:35:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.682 11:35:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:16:43.682 11:35:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:43.939 [2024-07-15 11:35:21.201608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.939 11:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:43.939 11:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:43.939 11:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.939 11:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:44.196 Malloc1 00:16:44.196 11:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.453 11:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:44.709 11:35:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.272 [2024-07-15 11:35:22.472053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.272 11:35:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:45.530 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:45.531 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:45.531 11:35:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:45.788 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:45.788 fio-3.35 00:16:45.788 Starting 1 thread 00:16:48.314 00:16:48.314 test: (groupid=0, jobs=1): err= 0: pid=87598: Mon Jul 15 11:35:25 2024 00:16:48.314 read: IOPS=6993, BW=27.3MiB/s (28.6MB/s)(54.9MiB/2010msec) 00:16:48.314 slat (usec): min=2, max=241, avg= 3.46, stdev= 2.61 00:16:48.314 clat (usec): min=2410, max=18684, avg=9645.81, stdev=1126.52 00:16:48.314 lat (usec): min=2442, max=18687, avg=9649.27, stdev=1126.40 00:16:48.314 clat percentiles (usec): 00:16:48.314 | 1.00th=[ 7177], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8848], 00:16:48.314 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:16:48.314 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10945], 95.00th=[11338], 00:16:48.314 | 99.00th=[11994], 99.50th=[12649], 99.90th=[16909], 99.95th=[17433], 00:16:48.314 | 99.99th=[18482] 00:16:48.314 bw ( KiB/s): min=26648, max=30112, per=99.94%, avg=27954.25, stdev=1509.07, samples=4 00:16:48.314 iops : min= 6662, max= 7528, avg=6988.50, stdev=377.30, samples=4 00:16:48.314 write: IOPS=6998, BW=27.3MiB/s (28.7MB/s)(54.9MiB/2010msec); 0 zone resets 00:16:48.314 slat (usec): min=2, max=188, avg= 3.63, stdev= 2.14 00:16:48.314 clat (usec): min=1428, max=17414, avg=8604.62, stdev=1000.73 00:16:48.314 lat (usec): 
min=1437, max=17417, avg=8608.25, stdev=1000.68 00:16:48.314 clat percentiles (usec): 00:16:48.314 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 7898], 00:16:48.314 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:16:48.314 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[ 9896], 00:16:48.314 | 99.00th=[10683], 99.50th=[10945], 99.90th=[15270], 99.95th=[16712], 00:16:48.314 | 99.99th=[17433] 00:16:48.314 bw ( KiB/s): min=26754, max=31072, per=100.00%, avg=28008.50, stdev=2050.49, samples=4 00:16:48.314 iops : min= 6688, max= 7768, avg=7002.00, stdev=512.72, samples=4 00:16:48.314 lat (msec) : 2=0.04%, 4=0.10%, 10=79.09%, 20=20.77% 00:16:48.314 cpu : usr=58.79%, sys=28.82%, ctx=8, majf=0, minf=7 00:16:48.314 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:48.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.314 issued rwts: total=14056,14066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.314 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.314 00:16:48.314 Run status group 0 (all jobs): 00:16:48.314 READ: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=54.9MiB (57.6MB), run=2010-2010msec 00:16:48.314 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=54.9MiB (57.6MB), run=2010-2010msec 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:48.314 11:35:25 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:48.314 11:35:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:48.314 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:48.314 fio-3.35 00:16:48.314 Starting 1 thread 00:16:50.843 00:16:50.843 test: (groupid=0, jobs=1): err= 0: pid=87642: Mon Jul 15 11:35:27 2024 00:16:50.843 read: IOPS=2859, BW=44.7MiB/s (46.9MB/s)(90.1MiB/2017msec) 00:16:50.843 slat (usec): min=3, max=168, avg= 5.56, stdev= 2.95 00:16:50.843 clat (usec): min=9426, max=42906, avg=24389.87, stdev=5070.40 00:16:50.843 lat (usec): min=9431, max=42910, avg=24395.43, stdev=5070.27 00:16:50.843 clat percentiles (usec): 00:16:50.843 | 1.00th=[14877], 5.00th=[17695], 10.00th=[19006], 20.00th=[20579], 00:16:50.843 | 30.00th=[21627], 40.00th=[22676], 50.00th=[23462], 60.00th=[24773], 00:16:50.843 | 70.00th=[25822], 80.00th=[27395], 90.00th=[30802], 95.00th=[34866], 00:16:50.843 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:16:50.843 | 99.99th=[42730] 00:16:50.843 bw ( KiB/s): min=17024, max=27296, per=53.15%, avg=24320.00, stdev=4911.42, samples=4 00:16:50.843 iops : min= 1064, max= 1706, avg=1520.00, stdev=306.96, samples=4 00:16:50.843 write: IOPS=1711, BW=26.7MiB/s (28.0MB/s)(49.5MiB/1851msec); 0 zone resets 00:16:50.843 slat (usec): min=37, max=260, avg=46.39, stdev= 7.89 00:16:50.843 clat (usec): min=20003, max=54730, avg=36080.25, stdev=4405.78 00:16:50.843 lat (usec): min=20067, max=54785, avg=36126.64, stdev=4405.45 00:16:50.843 clat percentiles (usec): 00:16:50.843 | 1.00th=[25035], 5.00th=[27919], 10.00th=[30278], 20.00th=[32900], 00:16:50.843 | 30.00th=[34341], 40.00th=[35390], 50.00th=[36439], 60.00th=[36963], 00:16:50.843 | 70.00th=[38536], 80.00th=[39584], 90.00th=[41681], 95.00th=[42730], 00:16:50.843 | 99.00th=[45876], 99.50th=[47449], 99.90th=[52167], 99.95th=[54264], 00:16:50.843 | 99.99th=[54789] 00:16:50.843 bw ( KiB/s): min=19232, max=29024, per=92.55%, avg=25344.00, stdev=4304.21, samples=4 00:16:50.843 iops : min= 1202, max= 1814, avg=1584.00, stdev=269.01, samples=4 00:16:50.843 lat (msec) : 10=0.08%, 20=9.62%, 50=90.23%, 100=0.07% 00:16:50.843 cpu : usr=76.09%, sys=20.68%, ctx=14, majf=0, minf=20 00:16:50.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:50.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.843 issued rwts: total=5768,3168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.843 00:16:50.843 Run status group 0 (all jobs): 00:16:50.843 READ: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=90.1MiB (94.5MB), run=2017-2017msec 00:16:50.843 WRITE: bw=26.7MiB/s (28.0MB/s), 
26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=49.5MiB (51.9MB), run=1851-1851msec 00:16:50.843 11:35:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.843 11:35:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:50.843 11:35:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:50.843 11:35:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.102 rmmod nvme_tcp 00:16:51.102 rmmod nvme_fabrics 00:16:51.102 rmmod nvme_keyring 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87462 ']' 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87462 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87462 ']' 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87462 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87462 00:16:51.102 killing process with pid 87462 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87462' 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87462 00:16:51.102 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87462 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.361 11:35:28 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:51.361 ************************************ 00:16:51.361 END TEST nvmf_fio_host 00:16:51.361 ************************************ 00:16:51.361 00:16:51.361 real 0m9.285s 00:16:51.361 user 0m38.298s 00:16:51.361 sys 0m2.474s 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:51.361 11:35:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.361 11:35:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:51.361 11:35:28 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:51.361 11:35:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:51.361 11:35:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.361 11:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.361 ************************************ 00:16:51.361 START TEST nvmf_failover 00:16:51.361 ************************************ 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:51.361 * Looking for test storage... 00:16:51.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
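(Editor's note, illustrative only.) The NVME_CONNECT and NVME_HOST variables sourced from nvmf/common.sh in the trace above are the building blocks the kernel-initiator tests typically expand into an nvme-cli connect call; failover.sh itself drives I/O through the SPDK bdevperf initiator later in this log, so the following is only a hedged sketch of how those pieces compose, with the address, port, and subsystem NQN taken from what this run configures further down:

# illustrative sketch only -- not part of failover.sh; values copied from this log
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421
NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'
# connect the kernel initiator to the TCP listener the target exposes on 10.0.0.2:4420
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1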
00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.361 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.362 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:51.620 Cannot find device "nvmf_tgt_br" 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.620 Cannot find device "nvmf_tgt_br2" 00:16:51.620 11:35:28 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:51.620 Cannot find device "nvmf_tgt_br" 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:51.620 Cannot find device "nvmf_tgt_br2" 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.620 11:35:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.620 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:51.878 11:35:29 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:51.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:16:51.878 00:16:51.878 --- 10.0.0.2 ping statistics --- 00:16:51.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.878 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:51.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:51.878 00:16:51.878 --- 10.0.0.3 ping statistics --- 00:16:51.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.878 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:51.878 00:16:51.878 --- 10.0.0.1 ping statistics --- 00:16:51.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.878 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
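(Editor's note.) The nvmf_veth_init trace above (here run again by the failover test, since each test sources nvmf/common.sh and calls nvmftestinit) builds the same virtual topology used earlier by nvmf_fio_host. A minimal standalone sketch of that topology, condensed from the ip/iptables commands visible in the trace and not the common.sh implementation itself: one initiator-side veth pair left in the root namespace, two target-side pairs moved into nvmf_tgt_ns_spdk, all root-namespace ends enslaved to the nvmf_br bridge, and TCP/4420 admitted on the initiator interface.

# sketch only: condensed from the commands traced above
ip netns add nvmf_tgt_ns_spdk

# initiator side stays in the root namespace; target sides move into the namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listen addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the root-namespace ends together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# admit NVMe/TCP traffic on the initiator interface and allow hairpin forwarding on the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check, as the test does: both target addresses must answer from the root namespace
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3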
00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87861 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87861 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87861 ']' 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.878 11:35:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:51.878 [2024-07-15 11:35:29.284810] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:16:51.878 [2024-07-15 11:35:29.284946] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.136 [2024-07-15 11:35:29.431962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:52.136 [2024-07-15 11:35:29.500886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.136 [2024-07-15 11:35:29.501191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.136 [2024-07-15 11:35:29.501384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.137 [2024-07-15 11:35:29.501539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.137 [2024-07-15 11:35:29.501703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
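(Editor's note.) Before the rpc.py configuration calls below are issued, waitforlisten (implemented in autotest_common.sh) blocks until the nvmf_tgt started above answers on its JSON-RPC UNIX socket; the socket at /var/tmp/spdk.sock is reachable from the root namespace even though the target runs inside nvmf_tgt_ns_spdk, which is why the rpc.py calls in this log need no "ip netns exec" prefix. A rough standalone equivalent, assuming the workspace rpc.py and the default socket path, and using the spdk_get_version RPC as a liveness probe:

# rough sketch of waiting for the target (pid 87861 above) to accept RPCs
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    if "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break            # target is up and listening on the RPC socket
    fi
    sleep 0.1
done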
00:16:52.137 [2024-07-15 11:35:29.501865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.137 [2024-07-15 11:35:29.502039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.137 [2024-07-15 11:35:29.502050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.069 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:53.327 [2024-07-15 11:35:30.766369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.327 11:35:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:53.584 Malloc0 00:16:53.841 11:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:54.098 11:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:54.355 11:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.611 [2024-07-15 11:35:31.971820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.611 11:35:31 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:54.882 [2024-07-15 11:35:32.239987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:54.882 11:35:32 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:55.143 [2024-07-15 11:35:32.492205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87979 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87979 /var/tmp/bdevperf.sock 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87979 ']' 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.143 11:35:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:56.515 11:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:56.515 11:35:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:56.515 11:35:33 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.772 NVMe0n1 00:16:56.772 11:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:57.347 00:16:57.348 11:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88032 00:16:57.348 11:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:57.348 11:35:34 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:58.279 11:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.537 [2024-07-15 11:35:35.950940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.537 [2024-07-15 11:35:35.951078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 
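(Editor's note.) To keep the repeated recv-state messages above in context, here is a condensed sketch of the setup they belong to, assembled only from the rpc.py calls already traced in failover.sh: a malloc-backed subsystem listening on three TCP ports, a bdevperf initiator (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f and driven via bdevperf.py perform_tests) attached over two of those ports, after which the test removes the 4420 listener so that I/O moves to the 4421 path.

# sketch condensed from the failover.sh trace above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: one malloc-backed subsystem listening on three TCP ports
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# initiator side: bdevperf attaches the same controller over two ports ...
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# ... and the test then drops the 4420 listener to exercise failover to 4421
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420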
[... the tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* message shown above ("The recv state of tqpair=0x23d9f80 is same with the state(5) to be set") repeats many more times while the 10.0.0.2:4420 listener is being removed; the verbatim duplicate log lines are elided here ...] 00:16:58.538 [2024-07-15 
11:35:35.952012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.538 [2024-07-15 11:35:35.952020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d9f80 is same with the state(5) to be set 00:16:58.538 11:35:35 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:01.815 11:35:38 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:02.072 00:17:02.072 11:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:02.330 [2024-07-15 11:35:39.775664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.775988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 
11:35:39.776038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.330 [2024-07-15 11:35:39.776103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776152] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776178] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 [2024-07-15 11:35:39.776194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23db340 is same with the state(5) to be set 00:17:02.331 11:35:39 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:05.613 11:35:42 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.613 
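Taken together, the host/failover.sh steps logged above (plus the removal of the port 4422 listener that follows a few lines below) boil down to the rpc.py sequence sketched here. The addresses, ports, NQN and socket path are copied from the log; packaging them as a standalone script, and the variable names, are illustrative only and not part of the test suite.

```bash
#!/usr/bin/env bash
# Sketch of the listener moves driven by host/failover.sh in this run,
# reassembled from the console log above (not part of the SPDK test suite).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Give bdevperf a second controller path to the subsystem via port 4422.
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"

# Remove the listener on port 4421, then give the host time to fail over.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
sleep 3

# Re-add the original listener on port 4420 and retire the one on 4422.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
```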
00:17:05.613 [2024-07-15 11:35:43.046775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:05.613 11:35:43 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:17:07.013 11:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:07.013 [2024-07-15 11:35:44.422711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dba20 is same with the state(5) to be set
[... the same tcp.c:1607 *ERROR* line repeats for tqpair=0x23dba20 through 2024-07-15 11:35:44.423310 ...]
00:17:07.014 11:35:44 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88032
00:17:12.276 0
00:17:12.276 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87979
00:17:12.276 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87979 ']'
00:17:12.276 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87979
00:17:12.276 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:17:12.276 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:12.276 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87979
00:17:12.277 killing process with pid 87979
11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:12.277 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:12.277 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87979'
00:17:12.277 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87979
00:17:12.277 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87979
00:17:12.541 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:12.541 [2024-07-15 11:35:32.577668] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization...
00:17:12.541 [2024-07-15 11:35:32.577836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87979 ]
00:17:12.541 [2024-07-15 11:35:32.722997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:12.541 [2024-07-15 11:35:32.782675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:12.541 Running I/O for 15 seconds...
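The common/autotest_common.sh xtrace above shows the steps killprocess runs through before the bdevperf app (pid 87979) is stopped and its try.txt log is dumped. Reconstructed from that trace alone, the helper behaves roughly like the sketch below; the @-line references point at the traced calls, and the sketch is an approximation, not the verbatim autotest_common.sh function.

```bash
# Rough reconstruction of the killprocess steps traced above for pid 87979;
# an approximation of the logged behaviour, not the verbatim helper.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # @948: refuse an empty pid
    kill -0 "$pid" || return 1                # @952: is the process still alive?
    if [ "$(uname)" = Linux ]; then           # @953: platform check
        process_name=$(ps --no-headers -o comm= "$pid")   # @954: e.g. reactor_0
    fi
    if [ "$process_name" = sudo ]; then       # @958: not the case in this run
        :                                     # (privileged kill path omitted)
    fi
    echo "killing process with pid $pid"      # @966
    kill "$pid"                               # @967
    wait "$pid"                               # @972: reap it before returning
}
```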
00:17:12.541 [2024-07-15 11:35:35.952492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:12.541 [2024-07-15 11:35:35.952568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for each outstanding READ and WRITE on qid:1 (lba 88576 through 89584), every completion reported as ABORTED - SQ DELETION (00/08), through 2024-07-15 11:35:35.955772 ...]
00:17:12.544 [2024-07-15 11:35:35.955788] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.955979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.955995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.544 [2024-07-15 11:35:35.956579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1409c90 is same with the state(5) to be set 00:17:12.544 [2024-07-15 11:35:35.956616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.544 [2024-07-15 11:35:35.956627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.544 [2024-07-15 11:35:35.956640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89336 len:8 PRP1 0x0 PRP2 0x0 00:17:12.544 [2024-07-15 11:35:35.956654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.544 [2024-07-15 11:35:35.956717] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1409c90 was disconnected and freed. reset controller. 
00:17:12.544 [2024-07-15 11:35:35.956735] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:17:12.544 [2024-07-15 11:35:35.956824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:12.544 [2024-07-15 11:35:35.956846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:12.544 [2024-07-15 11:35:35.956862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:12.544 [2024-07-15 11:35:35.956876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:12.544 [2024-07-15 11:35:35.956890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:12.544 [2024-07-15 11:35:35.956904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:12.544 [2024-07-15 11:35:35.956919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:12.545 [2024-07-15 11:35:35.956932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:12.545 [2024-07-15 11:35:35.956946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:12.545 [2024-07-15 11:35:35.961000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:12.545 [2024-07-15 11:35:35.961066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138de30 (9): Bad file descriptor
00:17:12.545 [2024-07-15 11:35:35.994693] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:12.545 [2024-07-15 11:35:39.776210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.545 [2024-07-15 11:35:39.776268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.545 [2024-07-15 11:35:39.776329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.545 [2024-07-15 11:35:39.776358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.545 [2024-07-15 11:35:39.776385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138de30 is same with the state(5) to be set 00:17:12.545 [2024-07-15 11:35:39.776501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.776974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.776988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.545 [2024-07-15 11:35:39.777444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.545 [2024-07-15 11:35:39.777461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.546 [2024-07-15 11:35:39.777475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.546 [2024-07-15 11:35:39.777504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.546 [2024-07-15 11:35:39.777533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 
[2024-07-15 11:35:39.777672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.546 [2024-07-15 11:35:39.777921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.546 [2024-07-15 11:35:39.777942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.777956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.777971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.777985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26576 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 
11:35:39.778954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.778970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.778984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.547 [2024-07-15 11:35:39.779209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.547 [2024-07-15 11:35:39.779230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.779975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.779991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.780012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.780043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.548 [2024-07-15 11:35:39.780073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 
11:35:39.780211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.548 [2024-07-15 11:35:39.780506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.548 [2024-07-15 11:35:39.780555] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.549 [2024-07-15 11:35:39.780572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:12.549 [2024-07-15 11:35:39.780584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26304 len:8 PRP1 0x0 PRP2 0x0 00:17:12.549 [2024-07-15 11:35:39.780598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:39.780656] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140bd90 was disconnected and freed. reset controller. 00:17:12.549 [2024-07-15 11:35:39.780673] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:12.549 [2024-07-15 11:35:39.780689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:12.549 [2024-07-15 11:35:39.784683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:12.549 [2024-07-15 11:35:39.784742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138de30 (9): Bad file descriptor 00:17:12.549 [2024-07-15 11:35:39.819684] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:12.549 [2024-07-15 11:35:44.423950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424216] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.424648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.424679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.424709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.424739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.424768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.424798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:78800 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.424970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.424985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.425015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.549 [2024-07-15 11:35:44.425045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.425075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.425105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.425135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.549 [2024-07-15 11:35:44.425165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.549 [2024-07-15 11:35:44.425195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.549 [2024-07-15 11:35:44.425211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.550 [2024-07-15 11:35:44.425880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.425910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.425939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.425969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.425986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.550 [2024-07-15 11:35:44.426338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.550 [2024-07-15 11:35:44.426354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426770] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.426972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.426987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427076] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.551 [2024-07-15 11:35:44.427273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78472 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.551 [2024-07-15 11:35:44.427683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.551 [2024-07-15 11:35:44.427699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:12.552 [2024-07-15 11:35:44.427713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.427980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.552 [2024-07-15 11:35:44.427994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.428033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:12.552 [2024-07-15 11:35:44.428048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:17:12.552 [2024-07-15 11:35:44.428060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78632 len:8 PRP1 0x0 PRP2 0x0 00:17:12.552 [2024-07-15 11:35:44.428074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.428133] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140bb80 was disconnected and freed. reset controller. 00:17:12.552 [2024-07-15 11:35:44.428153] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:12.552 [2024-07-15 11:35:44.428226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.552 [2024-07-15 11:35:44.428248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.428267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.552 [2024-07-15 11:35:44.428291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.428307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.552 [2024-07-15 11:35:44.428321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.428336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.552 [2024-07-15 11:35:44.428349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.552 [2024-07-15 11:35:44.428363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:12.552 [2024-07-15 11:35:44.428407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138de30 (9): Bad file descriptor 00:17:12.552 [2024-07-15 11:35:44.432418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:12.552 [2024-07-15 11:35:44.471825] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
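The burst of ABORTED - SQ DELETION completions above is the expected side effect of tearing down the active TCP path: every command still queued on the deleted submission queue is completed as aborted, after which bdev_nvme fails over to the next registered transport ID and resets the controller. A minimal sketch of driving that same sequence by hand is shown below, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock and a target exposing nqn.2016-06.io.spdk:cnode1 on 10.0.0.2; the rpc.py subcommands and flags are the ones visible in this run's trace, while the loop structure and shell variables are illustrative and not the test script itself.

# Hedged sketch, not part of the captured log. Ports, bdev name, and NQN are taken
# from this run; the surrounding structure is an assumption for illustration only.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Give the initiator three alternate paths to the same subsystem.
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
done

# Attach all three paths to a single bdev_nvme controller inside bdevperf.
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
done

# Removing the path that currently carries I/O aborts its queued requests
# (the SQ DELETION notices) and triggers failover to the next path.
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn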
00:17:12.552 
00:17:12.552 Latency(us) 
00:17:12.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:12.552 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:17:12.552 Verification LBA range: start 0x0 length 0x4000 
00:17:12.552 NVMe0n1 : 15.01 7775.51 30.37 206.61 0.00 16001.50 640.47 34555.35 
00:17:12.552 =================================================================================================================== 
00:17:12.552 Total : 7775.51 30.37 206.61 0.00 16001.50 640.47 34555.35 
00:17:12.552 Received shutdown signal, test time was about 15.000000 seconds 
00:17:12.552 
00:17:12.552 Latency(us) 
00:17:12.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:12.552 =================================================================================================================== 
00:17:12.552 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88230 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88230 /var/tmp/bdevperf.sock 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88230 ']' 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:12.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.552 11:35:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:12.810 11:35:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.810 11:35:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:12.810 11:35:50 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:13.066 [2024-07-15 11:35:50.514273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:13.323 11:35:50 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:13.593 [2024-07-15 11:35:50.938634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:13.593 11:35:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:13.873 NVMe0n1 00:17:14.130 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:14.387 00:17:14.387 11:35:51 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:14.645 00:17:14.645 11:35:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:14.645 11:35:52 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:14.903 11:35:52 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:15.160 11:35:52 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:18.437 11:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:18.437 11:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:18.437 11:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:18.437 11:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88359 00:17:18.437 11:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88359 00:17:19.811 0 00:17:19.811 11:35:56 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:19.811 [2024-07-15 11:35:49.961931] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:17:19.812 [2024-07-15 11:35:49.962078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88230 ] 00:17:19.812 [2024-07-15 11:35:50.102632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.812 [2024-07-15 11:35:50.163222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.812 [2024-07-15 11:35:52.491891] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:19.812 [2024-07-15 11:35:52.492012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.812 [2024-07-15 11:35:52.492038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.812 [2024-07-15 11:35:52.492057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.812 [2024-07-15 11:35:52.492071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.812 [2024-07-15 11:35:52.492084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.812 [2024-07-15 11:35:52.492098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.812 [2024-07-15 11:35:52.492112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:19.812 [2024-07-15 11:35:52.492125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:19.812 [2024-07-15 11:35:52.492138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:19.812 [2024-07-15 11:35:52.492180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.812 [2024-07-15 11:35:52.492213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1e30 (9): Bad file descriptor 00:17:19.812 [2024-07-15 11:35:52.504950] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:19.812 Running I/O for 1 seconds... 
00:17:19.812 00:17:19.812 Latency(us) 00:17:19.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.812 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:19.812 Verification LBA range: start 0x0 length 0x4000 00:17:19.812 NVMe0n1 : 1.01 8674.01 33.88 0.00 0.00 14655.69 1653.29 16443.58 00:17:19.812 =================================================================================================================== 00:17:19.812 Total : 8674.01 33.88 0.00 0.00 14655.69 1653.29 16443.58 00:17:19.812 11:35:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:19.812 11:35:56 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:20.071 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.330 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:20.330 11:35:57 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:20.588 11:35:58 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.846 11:35:58 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:24.166 11:36:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:24.166 11:36:01 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:24.166 11:36:01 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88230 00:17:24.166 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88230 ']' 00:17:24.166 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88230 00:17:24.166 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88230 00:17:24.424 killing process with pid 88230 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88230' 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88230 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88230 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:24.424 11:36:01 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:24.991 11:36:02 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.991 rmmod nvme_tcp 00:17:24.991 rmmod nvme_fabrics 00:17:24.991 rmmod nvme_keyring 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87861 ']' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87861 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87861 ']' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87861 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87861 00:17:24.991 killing process with pid 87861 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87861' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87861 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87861 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.991 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.250 11:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:25.250 00:17:25.250 real 0m33.746s 00:17:25.250 user 2m12.446s 00:17:25.250 sys 0m4.760s 00:17:25.250 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.250 11:36:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:25.250 ************************************ 00:17:25.250 END TEST nvmf_failover 00:17:25.250 ************************************ 00:17:25.250 11:36:02 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:25.250 11:36:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:25.250 11:36:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.250 11:36:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.250 11:36:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.250 ************************************ 00:17:25.250 START TEST nvmf_host_discovery 00:17:25.250 ************************************ 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:25.250 * Looking for test storage... 00:17:25.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:25.250 Cannot find device "nvmf_tgt_br" 00:17:25.250 
11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.250 Cannot find device "nvmf_tgt_br2" 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:25.250 Cannot find device "nvmf_tgt_br" 00:17:25.250 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:25.251 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:25.251 Cannot find device "nvmf_tgt_br2" 00:17:25.251 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:25.251 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.509 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:25.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:17:25.510 00:17:25.510 --- 10.0.0.2 ping statistics --- 00:17:25.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.510 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:25.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:25.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.117 ms 00:17:25.510 00:17:25.510 --- 10.0.0.3 ping statistics --- 00:17:25.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.510 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:25.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:17:25.510 00:17:25.510 --- 10.0.0.1 ping statistics --- 00:17:25.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.510 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.510 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88663 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88663 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88663 ']' 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.769 11:36:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:25.769 [2024-07-15 11:36:03.053704] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:25.769 [2024-07-15 11:36:03.054440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.769 [2024-07-15 11:36:03.190415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.027 [2024-07-15 11:36:03.251359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:26.027 [2024-07-15 11:36:03.251420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.027 [2024-07-15 11:36:03.251432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.027 [2024-07-15 11:36:03.251440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.027 [2024-07-15 11:36:03.251448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.027 [2024-07-15 11:36:03.251476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.593 11:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.593 11:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:26.593 11:36:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.593 11:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.593 11:36:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.593 [2024-07-15 11:36:04.034761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.593 [2024-07-15 11:36:04.042886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.593 null0 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.593 null1 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:26.593 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.854 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88713 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88713 /tmp/host.sock 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88713 ']' 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.854 11:36:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.854 [2024-07-15 11:36:04.143015] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:17:26.854 [2024-07-15 11:36:04.144268] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88713 ] 00:17:26.854 [2024-07-15 11:36:04.285940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.112 [2024-07-15 11:36:04.347002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.677 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:27.937 11:36:05 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:27.937 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.196 [2024-07-15 11:36:05.507269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.196 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:17:28.455 11:36:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:28.713 [2024-07-15 11:36:06.168016] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:28.713 [2024-07-15 11:36:06.168059] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:28.713 [2024-07-15 11:36:06.168080] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:28.971 [2024-07-15 11:36:06.254593] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:28.971 [2024-07-15 11:36:06.311713] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:28.971 [2024-07-15 11:36:06.311766] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.538 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:29.539 11:36:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.797 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.798 [2024-07-15 11:36:07.125138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:29.798 [2024-07-15 11:36:07.125650] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:29.798 [2024-07-15 11:36:07.125695] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:29.798 [2024-07-15 11:36:07.213134] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:29.798 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.798 [2024-07-15 11:36:07.271468] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:29.798 [2024-07-15 11:36:07.271506] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:29.798 [2024-07-15 11:36:07.271514] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.057 [2024-07-15 11:36:07.373308] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:30.057 [2024-07-15 11:36:07.373349] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:30.057 [2024-07-15 11:36:07.379098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.057 [2024-07-15 11:36:07.379132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.057 [2024-07-15 11:36:07.379147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.057 [2024-07-15 11:36:07.379156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.057 [2024-07-15 11:36:07.379166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.057 [2024-07-15 11:36:07.379176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.057 [2024-07-15 11:36:07.379186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:30.057 [2024-07-15 11:36:07.379195] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:30.057 [2024-07-15 11:36:07.379204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:30.057 [2024-07-15 11:36:07.389046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.057 [2024-07-15 11:36:07.399070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.057 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.058 [2024-07-15 11:36:07.399244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.058 [2024-07-15 11:36:07.399271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7c50 with addr=10.0.0.2, port=4420 00:17:30.058 [2024-07-15 11:36:07.399285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.058 [2024-07-15 11:36:07.399306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.058 [2024-07-15 11:36:07.399335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.058 [2024-07-15 11:36:07.399346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.058 [2024-07-15 11:36:07.399358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.058 [2024-07-15 11:36:07.399375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
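The repeated "connect() failed, errno = 111" blocks here are expected: the test has just removed the 10.0.0.2:4420 listener, so every controller-reset attempt against that port is refused until the discovery poller prunes the stale path (the "not found" message appears further down). A minimal sketch of the path check the test performs once the pruning happens, mirroring the get_subsystem_paths helper whose xtrace appears in this log (socket path and jq filter are taken from the trace; treat the standalone invocation as an assumption):

    # Hedged sketch: list remaining trsvcid values for controller nvme0 and
    # expect only the second listener (4421) to survive the 4420 removal.
    paths=$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    [[ "$paths" == "4421" ]] && echo "4420 path pruned"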
00:17:30.058 [2024-07-15 11:36:07.409159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.058 [2024-07-15 11:36:07.409312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.058 [2024-07-15 11:36:07.409338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7c50 with addr=10.0.0.2, port=4420 00:17:30.058 [2024-07-15 11:36:07.409351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.058 [2024-07-15 11:36:07.409371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.058 [2024-07-15 11:36:07.409399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.058 [2024-07-15 11:36:07.409411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.058 [2024-07-15 11:36:07.409422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.058 [2024-07-15 11:36:07.409438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:30.058 [2024-07-15 11:36:07.419242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.058 [2024-07-15 11:36:07.419385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.058 [2024-07-15 11:36:07.419412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7c50 with addr=10.0.0.2, port=4420 00:17:30.058 [2024-07-15 11:36:07.419425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.058 [2024-07-15 11:36:07.419445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.058 [2024-07-15 11:36:07.419488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.058 [2024-07-15 11:36:07.419502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.058 [2024-07-15 11:36:07.419513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.058 [2024-07-15 11:36:07.419529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:30.058 [2024-07-15 11:36:07.429328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.058 [2024-07-15 11:36:07.429484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.058 [2024-07-15 11:36:07.429509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7c50 with addr=10.0.0.2, port=4420 00:17:30.058 [2024-07-15 11:36:07.429521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.058 [2024-07-15 11:36:07.429540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.058 [2024-07-15 11:36:07.429583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.058 [2024-07-15 11:36:07.429596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.058 [2024-07-15 11:36:07.429607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.058 [2024-07-15 11:36:07.429624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:30.058 [2024-07-15 11:36:07.439418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.058 [2024-07-15 11:36:07.439569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.058 [2024-07-15 11:36:07.439595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7c50 with addr=10.0.0.2, port=4420 00:17:30.058 [2024-07-15 11:36:07.439608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.058 [2024-07-15 11:36:07.439628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.058 [2024-07-15 11:36:07.439658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.058 [2024-07-15 11:36:07.439669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.058 [2024-07-15 11:36:07.439680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.058 [2024-07-15 11:36:07.439696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:30.058 [2024-07-15 11:36:07.449490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:30.058 [2024-07-15 11:36:07.449633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:30.058 [2024-07-15 11:36:07.449658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad7c50 with addr=10.0.0.2, port=4420 00:17:30.058 [2024-07-15 11:36:07.449671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7c50 is same with the state(5) to be set 00:17:30.058 [2024-07-15 11:36:07.449690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7c50 (9): Bad file descriptor 00:17:30.058 [2024-07-15 11:36:07.449706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:30.058 [2024-07-15 11:36:07.449716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:30.058 [2024-07-15 11:36:07.449726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:30.058 [2024-07-15 11:36:07.449742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
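Each "[[ ... == ... ]]" check in this run is driven by the waitforcondition helper from autotest_common.sh, whose xtrace (local cond, local max=10, (( max-- )), eval, return 0) is visible throughout. A rough reconstruction of that retry loop, offered only as a sketch (the real helper may sleep between attempts and handle exhaustion differently):

    # Hedged reconstruction of the loop traced at autotest_common.sh@912-916
    waitforcondition() {
        local cond=$1   # expression to re-evaluate, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # bounded number of attempts
        while ((max--)); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # assumed pause between retries
        done
        return 1                       # condition never became true within the budget
    }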
00:17:30.058 [2024-07-15 11:36:07.459394] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:30.058 [2024-07-15 11:36:07.459439] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.058 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:30.316 
11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.574 11:36:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.509 [2024-07-15 11:36:08.828514] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:31.509 [2024-07-15 11:36:08.828574] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:31.509 [2024-07-15 11:36:08.828597] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:31.509 [2024-07-15 11:36:08.916662] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:31.509 [2024-07-15 11:36:08.984171] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:31.509 [2024-07-15 11:36:08.984245] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 11:36:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:31.768 2024/07/15 11:36:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:31.768 request: 00:17:31.768 { 00:17:31.768 "method": "bdev_nvme_start_discovery", 00:17:31.768 "params": { 00:17:31.768 "name": "nvme", 00:17:31.768 "trtype": "tcp", 00:17:31.768 "traddr": "10.0.0.2", 00:17:31.768 "adrfam": "ipv4", 00:17:31.768 "trsvcid": "8009", 00:17:31.768 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:31.768 "wait_for_attach": true 00:17:31.768 } 00:17:31.768 } 00:17:31.768 Got JSON-RPC error response 00:17:31.768 GoRPCClient: error on JSON-RPC call 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 2024/07/15 11:36:09 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:31.768 request: 00:17:31.768 { 00:17:31.768 "method": "bdev_nvme_start_discovery", 00:17:31.768 "params": { 00:17:31.768 "name": "nvme_second", 00:17:31.768 "trtype": "tcp", 00:17:31.768 "traddr": "10.0.0.2", 00:17:31.768 "adrfam": "ipv4", 00:17:31.768 "trsvcid": "8009", 00:17:31.768 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:31.768 "wait_for_attach": true 00:17:31.768 } 00:17:31.768 } 00:17:31.768 Got JSON-RPC error response 00:17:31.768 GoRPCClient: error on JSON-RPC call 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:31.768 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.769 11:36:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:33.143 [2024-07-15 11:36:10.248638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:33.143 [2024-07-15 11:36:10.248724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad3f00 with addr=10.0.0.2, port=8010 00:17:33.143 [2024-07-15 11:36:10.248748] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:33.143 [2024-07-15 11:36:10.248761] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:33.143 [2024-07-15 11:36:10.248771] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:34.076 [2024-07-15 11:36:11.248637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.076 [2024-07-15 11:36:11.248718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad3f00 with addr=10.0.0.2, port=8010 00:17:34.076 [2024-07-15 11:36:11.248742] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:34.076 [2024-07-15 11:36:11.248753] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:34.076 [2024-07-15 11:36:11.248764] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:35.065 [2024-07-15 11:36:12.248452] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
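This last negative case points nvme_second at port 8010, where nothing is listening, and passes -T 3000 so the attach attempt is abandoned after three seconds; the connect() failures above and the Code=-110 "Connection timed out" response below are the expected result. A sketch of the equivalent direct RPC call, with the flags copied from the rpc_cmd invocation in this trace (the standalone rpc.py form is an assumption):

    # Hedged sketch: start discovery against a non-listening port with a 3 s attach timeout;
    # the call should fail with JSON-RPC error Code=-110 (Connection timed out).
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000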
00:17:35.065 2024/07/15 11:36:12 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:35.065 request: 00:17:35.065 { 00:17:35.065 "method": "bdev_nvme_start_discovery", 00:17:35.065 "params": { 00:17:35.065 "name": "nvme_second", 00:17:35.065 "trtype": "tcp", 00:17:35.065 "traddr": "10.0.0.2", 00:17:35.065 "adrfam": "ipv4", 00:17:35.065 "trsvcid": "8010", 00:17:35.065 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:35.065 "wait_for_attach": false, 00:17:35.065 "attach_timeout_ms": 3000 00:17:35.065 } 00:17:35.065 } 00:17:35.065 Got JSON-RPC error response 00:17:35.065 GoRPCClient: error on JSON-RPC call 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88713 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.065 rmmod nvme_tcp 00:17:35.065 rmmod nvme_fabrics 00:17:35.065 rmmod nvme_keyring 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:35.065 11:36:12 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88663 ']' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88663 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88663 ']' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88663 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88663 00:17:35.065 killing process with pid 88663 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88663' 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88663 00:17:35.065 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88663 00:17:35.324 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:35.325 00:17:35.325 real 0m10.103s 00:17:35.325 user 0m20.077s 00:17:35.325 sys 0m1.441s 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.325 ************************************ 00:17:35.325 END TEST nvmf_host_discovery 00:17:35.325 ************************************ 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:35.325 11:36:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:35.325 11:36:12 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:35.325 11:36:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:35.325 11:36:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.325 11:36:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.325 ************************************ 00:17:35.325 START TEST nvmf_host_multipath_status 00:17:35.325 ************************************ 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:35.325 * Looking for test 
storage... 00:17:35.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.325 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.583 Cannot find device "nvmf_tgt_br" 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:17:35.583 Cannot find device "nvmf_tgt_br2" 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.583 Cannot find device "nvmf_tgt_br" 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.583 Cannot find device "nvmf_tgt_br2" 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.583 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.584 11:36:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.584 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.842 11:36:13 
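The ip commands in this part of the trace come from nvmf_veth_init in nvmf/common.sh for NET_TYPE=virt. Condensed into one place (a sketch assembled from the commands in this trace, not the verbatim helper), the topology being built is:

# initiator side stays in the root namespace, target side lives in nvmf_tgt_ns_spdk
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1 = initiator address, 10.0.0.2 / 10.0.0.3 = first / second target address
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# all links are brought up; the *_br peers are then enslaved to the nvmf_br bridge
# and iptables admits TCP/4420 plus bridge-internal forwarding (following commands)

The three ping checks that follow confirm the initiator can reach both target-side addresses, and that the namespace can reach the initiator, before any NVMe-oF traffic is attempted.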
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:17:35.842 00:17:35.842 --- 10.0.0.2 ping statistics --- 00:17:35.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.842 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:35.842 00:17:35.842 --- 10.0.0.3 ping statistics --- 00:17:35.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.842 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:35.842 00:17:35.842 --- 10.0.0.1 ping statistics --- 00:17:35.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.842 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89174 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89174 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89174 ']' 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.842 11:36:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:35.842 [2024-07-15 11:36:13.242410] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
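With the namespace reachable, nvmfappstart launches the target inside it. Stripped of the tracing, the launch recorded above amounts to roughly the following (paths, mask, and socket taken from the trace; a sketch, not the verbatim helper):

modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!    # 89174 in this run
# waitforlisten then polls until the app answers RPCs on /var/tmp/spdk.sock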
00:17:35.842 [2024-07-15 11:36:13.242514] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.100 [2024-07-15 11:36:13.383626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.100 [2024-07-15 11:36:13.457907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.100 [2024-07-15 11:36:13.458003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.100 [2024-07-15 11:36:13.458026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.100 [2024-07-15 11:36:13.458043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.100 [2024-07-15 11:36:13.458057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.100 [2024-07-15 11:36:13.459588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.100 [2024-07-15 11:36:13.459612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89174 00:17:37.031 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:37.288 [2024-07-15 11:36:14.560890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.288 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:37.545 Malloc0 00:17:37.545 11:36:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:37.803 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.060 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.317 [2024-07-15 11:36:15.690021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.317 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:17:38.574 [2024-07-15 11:36:15.938238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89277 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89277 /var/tmp/bdevperf.sock 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89277 ']' 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:38.574 11:36:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:38.831 11:36:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.831 11:36:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:38.831 11:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:39.089 11:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:39.654 Nvme0n1 00:17:39.654 11:36:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:39.911 Nvme0n1 00:17:40.171 11:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:40.171 11:36:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:42.119 11:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:42.119 11:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:42.378 11:36:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:17:42.636 11:36:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:43.579 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:43.579 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:43.838 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.838 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:44.096 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.096 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:44.096 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.096 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:44.354 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.354 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:44.354 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.354 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:44.611 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.611 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:44.611 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.611 11:36:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:44.870 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.870 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:44.870 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.870 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:45.127 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.127 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:45.127 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.127 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:45.385 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.385 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:45.385 11:36:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:45.644 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:45.902 11:36:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.274 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:47.533 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.533 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:47.533 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.533 11:36:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:47.791 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.791 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:47.791 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.791 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:48.049 11:36:25 
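The RPC calls traced a little earlier are the whole data path for this test: one malloc-backed subsystem with two TCP listeners on the target, and one bdevperf instance on the host that attaches both portals to the same Nvme0 controller. Condensed (a sketch built from the commands in the trace, not the verbatim multipath_status.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side (RPCs go to the nvmf_tgt started earlier in the namespace)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# host side: bdevperf in its own process; the second attach uses -x multipath,
# so it adds a second path to the existing bdev instead of creating a new one
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# both attach calls report the same bdev, Nvme0n1; bdevperf.py perform_tests
# (run asynchronously in the script) then drives verify I/O across the paths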
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.049 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:48.049 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.049 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:48.307 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.307 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:48.307 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.307 11:36:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:48.872 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.872 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:48.872 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:49.129 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:49.386 11:36:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:50.758 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:50.758 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:50.758 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.758 11:36:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:50.758 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.758 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:50.758 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.758 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:51.015 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:51.015 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:51.015 11:36:28 
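Every check_status line in this trace expands into six port_status probes, and each probe is just bdev_nvme_get_io_paths filtered through jq, as the repeated rpc.py/jq pairs above show. Reconstructed from those calls (argument order is 4420-current, 4421-current, 4420-connected, 4421-connected, 4420-accessible, 4421-accessible; not the verbatim script):

port_status() {    # port_status <trsvcid> <field> <expected>
    local state
    state=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ $state == "$3" ]]
}

check_status() {
    port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

In these results, current marks the path I/O is routed to right now, while connected and accessible track the TCP connection and the ANA state reported by the target; that is why, later in the trace, a path can stay connected=true yet flip to accessible=false once its listener is set to inaccessible.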
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.015 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:51.273 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.273 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:51.273 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.273 11:36:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:51.838 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.838 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:51.838 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.838 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:52.096 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.096 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:52.096 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.096 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:52.353 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.353 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:52.353 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:52.611 11:36:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:52.868 11:36:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:53.800 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:53.800 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:53.800 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.800 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:54.365 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.365 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:54.365 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.365 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:54.632 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:54.632 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:54.632 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.632 11:36:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:54.890 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.890 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:54.890 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.890 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:55.456 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.456 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:55.456 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.456 11:36:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:55.715 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.715 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:55.715 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:55.715 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:55.972 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:55.972 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:55.972 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:56.229 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:56.486 11:36:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:57.910 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:57.910 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:57.910 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.910 11:36:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:57.910 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:57.910 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:57.910 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.910 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:58.475 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:58.475 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:58.475 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.475 11:36:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:58.732 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.732 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:58.732 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.732 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:58.990 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.990 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:58.990 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:58.990 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:59.248 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:59.248 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:59.248 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.248 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:59.505 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:59.505 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:59.505 11:36:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:59.762 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:00.020 11:36:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:00.956 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:00.956 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:00.956 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.956 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:01.222 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:01.222 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:01.222 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.222 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:01.480 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.480 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:01.480 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.480 11:36:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:02.045 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.045 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:02.045 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:02.045 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.302 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.302 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:02.302 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.302 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:02.560 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:02.561 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:02.561 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.561 11:36:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:02.818 11:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:02.818 11:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:03.385 11:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:03.385 11:36:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:03.643 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:03.900 11:36:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.275 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:05.533 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:05.533 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:05.533 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:05.533 11:36:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:06.100 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.100 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:06.100 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.100 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:06.357 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.357 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:06.357 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:06.357 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.615 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.615 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:06.615 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.615 11:36:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:06.874 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.874 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:06.874 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:07.134 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:07.392 11:36:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:08.326 
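Each state change above is a set_ANA_state call, which simply retargets the two listeners and then sleeps for a second so the host can notice; the bdev_nvme_set_multipath_policy call just above additionally switches Nvme0n1 from the single-active-path behaviour seen earlier to active_active. In the shape the trace shows (a reconstruction, not the verbatim helper):

set_ANA_state() {    # set_ANA_state <state for 4420> <state for 4421>
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# switch the initiator's path selection for Nvme0n1 to active/active
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

With active_active in effect, both optimized paths report current=true at the same time, which is exactly the difference between the "check_status true false ..." lines before the switch and the "check_status true true ..." lines after it.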
11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:08.326 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:08.326 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.326 11:36:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:08.892 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.150 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.150 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:09.150 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.150 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:09.408 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.408 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:09.408 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.408 11:36:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:09.974 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.974 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:09.974 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.974 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:10.232 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:10.232 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:10.232 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:10.490 11:36:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:10.750 11:36:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:11.684 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:11.684 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:11.684 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:11.684 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.943 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:11.943 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:11.943 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:11.943 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:12.201 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.201 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:12.201 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.201 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:12.766 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.766 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:12.766 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.766 11:36:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:12.766 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:12.766 11:36:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:12.766 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.766 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:13.023 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.023 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:13.023 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.023 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:13.586 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.586 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:13.586 11:36:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:13.586 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:13.842 11:36:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.211 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:15.468 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.468 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:15.468 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.468 11:36:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:15.726 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:15.726 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:15.726 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.726 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:16.290 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.290 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:16.290 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.290 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:16.548 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.548 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:16.548 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.548 11:36:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89277 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89277 ']' 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89277 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89277 00:18:16.806 killing process with pid 89277 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89277' 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89277 00:18:16.806 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89277 00:18:16.806 Connection closed with partial response: 00:18:16.806 
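[editor note] A minimal sketch of the port_status/check_status helpers whose expansions are traced repeatedly above (host/multipath_status.sh@64-73): each check queries bdev_nvme_get_io_paths over the bdevperf RPC socket, selects the io_path by trsvcid with jq, and compares one attribute against the expected value. The rpc.py/jq pipeline, the jq filter, and the [[ ... == ... ]] comparison are visible in the trace; the local variable names and exact function layout are assumptions.

# sketch, reconstructed from the trace above
port_status() {
	local port=$1 attr=$2 expected=$3
	local actual
	actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
		bdev_nvme_get_io_paths | \
		jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
	[[ "$actual" == "$expected" ]]
}

check_status() {
	# argument order matches the traced calls: current, then connected, then accessible,
	# for port 4420 followed by port 4421 in each pair
	port_status 4420 current    "$1"
	port_status 4421 current    "$2"
	port_status 4420 connected  "$3"
	port_status 4421 connected  "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}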
00:18:16.806 00:18:17.075 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89277 00:18:17.075 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:17.075 [2024-07-15 11:36:16.014517] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:17.075 [2024-07-15 11:36:16.014660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89277 ] 00:18:17.075 [2024-07-15 11:36:16.147117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.075 [2024-07-15 11:36:16.207457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.075 Running I/O for 90 seconds... 00:18:17.075 [2024-07-15 11:36:33.647986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.648771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.648786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:18:17.075 [2024-07-15 11:36:33.648991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.075 [2024-07-15 11:36:33.649791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.075 [2024-07-15 11:36:33.649821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.649875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.649894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.649918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.649933] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.649956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.649971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.649994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650321] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.650978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.650992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.651015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.651030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.651053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.651069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.076 [2024-07-15 11:36:33.652803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.076 [2024-07-15 11:36:33.652817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.652849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.652865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.652894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.652909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.652937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.652953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.652982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.652998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:18:17.077 [2024-07-15 11:36:33.653258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:33.653936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:33.653965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.077 [2024-07-15 11:36:33.653980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.262614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.262704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.262758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.262802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.262833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.262849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.262871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.262885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.262908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.262935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.262966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.262995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:17.077 [2024-07-15 11:36:51.263462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.077 [2024-07-15 11:36:51.263620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.077 [2024-07-15 11:36:51.263637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.263678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.263736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.263825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.263892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.263933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.263970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.263991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.264069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.264934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.264962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.264983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.265020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.265049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 
11:36:51.265075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.265100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.265123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.265138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.265168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.265190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.265227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.265257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.265289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.265306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.267172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.267221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.267259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.267294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.267330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 
cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.078 [2024-07-15 11:36:51.267365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.267400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.267451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.267489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.267524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.078 [2024-07-15 11:36:51.267560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.078 [2024-07-15 11:36:51.267577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.079 [2024-07-15 11:36:51.267723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.079 [2024-07-15 11:36:51.267887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.079 [2024-07-15 11:36:51.267901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.267933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.267949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.267970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.267984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.268463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.268676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.268691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.269624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:18:17.080 [2024-07-15 11:36:51.269888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.269938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.269959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.269983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.270005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.270020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.270042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.080 [2024-07-15 11:36:51.270056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.270433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.270461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.270487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.270503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.080 [2024-07-15 11:36:51.270525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.080 [2024-07-15 11:36:51.270539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.270597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.270632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.270668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.270704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.270739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.270775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.270796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.270811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.271528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.271978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.271994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272065] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.272696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.272717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.081 [2024-07-15 11:36:51.272732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.275495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.275531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.081 [2024-07-15 11:36:51.275579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.081 [2024-07-15 11:36:51.275597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.275967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.275981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 
dnr:0 00:18:17.082 [2024-07-15 11:36:51.276332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.276452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.276474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.276488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.082 [2024-07-15 11:36:51.277780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.277830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.082 [2024-07-15 11:36:51.277855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.082 [2024-07-15 11:36:51.277870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.277892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.277906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.277927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.277941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.277962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.277977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.278523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.278881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.278917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:17.083 [2024-07-15 11:36:51.278953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.278974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.278988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.279009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.279023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.279045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.279060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.280623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.280683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.280719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.280755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.280792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.280828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.280863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.280899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.280935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.280971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.281006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.083 [2024-07-15 11:36:51.281042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281216] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.083 [2024-07-15 11:36:51.281287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.083 [2024-07-15 11:36:51.281301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.281322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.281337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.281359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.281373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.281395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.281409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.281431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.281446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.283939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.283973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.084 
[2024-07-15 11:36:51.284092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d 
p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.284876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.284976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.284995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.285032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.285067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.285103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.285772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.084 [2024-07-15 11:36:51.285814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.285866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.084 [2024-07-15 11:36:51.285887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.084 [2024-07-15 11:36:51.285902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.285922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.285937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.285958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.285972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.285993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 
11:36:51.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.286277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.286313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.286348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.286383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.286405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.286419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 
nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.301709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.301732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.301747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.304644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.304690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.304728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.304763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.085 [2024-07-15 11:36:51.304800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.304836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.304872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.304909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.304945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.304966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.304981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.305016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.085 [2024-07-15 11:36:51.305032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.085 [2024-07-15 11:36:51.305054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.086 
[2024-07-15 11:36:51.305232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.305246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.305286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.305428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.305449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.305477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.306880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.306917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.306951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.306970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.306996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 
sqhd:000c p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.307563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.307612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.307656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.307699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.307741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.307784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.307810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.086 [2024-07-15 11:36:51.307827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.086 [2024-07-15 11:36:51.310539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.086 [2024-07-15 11:36:51.310586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.310606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.310649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 
[2024-07-15 11:36:51.310693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.310736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.310779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.310822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.310876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.310920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.310963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.310989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:528 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 
nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.311805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.311917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.311934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.087 [2024-07-15 11:36:51.313958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.313983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.314000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.314027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.314045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.087 [2024-07-15 11:36:51.314071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.314088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:18:17.087 [2024-07-15 11:36:51.314114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.087 [2024-07-15 11:36:51.314131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.314535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.314596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.314639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.314682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.314806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.314824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.315588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.315639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.315684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.315727] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.315769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.315812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.315855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.315898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.315941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.315966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.315983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.316042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.316086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.316128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.316176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.316219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.316261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.088 [2024-07-15 11:36:51.316304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.316346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.316389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.088 [2024-07-15 11:36:51.316432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.088 [2024-07-15 11:36:51.316457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.316474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.316500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.316517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.316566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.316588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.318880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 
[2024-07-15 11:36:51.318911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.318939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.318955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.318983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.318998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1240 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.319914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.319935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.319950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.321221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.321265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321287] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.321302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.089 [2024-07-15 11:36:51.321338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.321608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.321623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.089 [2024-07-15 11:36:51.322130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.089 [2024-07-15 11:36:51.322162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:18:17.090 [2024-07-15 11:36:51.322189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.322687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.322744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.322759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.323450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.323643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.323766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.323802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.323910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.323945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.323966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.323981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.324280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.324315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.090 [2024-07-15 11:36:51.324351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:17.090 [2024-07-15 11:36:51.324386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.090 [2024-07-15 11:36:51.324443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.090 [2024-07-15 11:36:51.324457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.091 [2024-07-15 11:36:51.324479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.091 [2024-07-15 11:36:51.324494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.091 Received shutdown signal, test time was about 36.586564 seconds 00:18:17.091 00:18:17.091 Latency(us) 00:18:17.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.091 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.091 Verification LBA range: start 0x0 length 0x4000 00:18:17.091 Nvme0n1 : 36.59 8299.77 32.42 0.00 0.00 15390.79 759.62 4026531.84 00:18:17.091 =================================================================================================================== 00:18:17.091 Total : 8299.77 32.42 0.00 0.00 15390.79 759.62 4026531.84 00:18:17.091 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.350 rmmod nvme_tcp 00:18:17.350 rmmod nvme_fabrics 00:18:17.350 rmmod nvme_keyring 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89174 ']' 
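The repeated "(03/02)" pairs printed by spdk_nvme_print_completion above are Status Code Type 0x3 (path-related) and Status Code 0x2, i.e. ANA Inaccessible, which is consistent with the multipath_status test deliberately toggling the subsystem's ANA state while bdevperf keeps issuing I/O; the verification job still finishes with the summary shown above (8299.77 IOPS, 32.42 MiB/s over 36.59 s), presumably because the I/O is retried on a path that remains accessible. Below is a minimal sketch, not part of the test scripts, of how a host application using the SPDK NVMe driver could recognize this status in its completion callback; the function names and the retry placeholder are illustrative assumptions, only the numeric sct/sc values come from the log.

```c
/*
 * Minimal sketch (illustrative, not part of the autotest above): detect the
 * completion status that spdk_nvme_print_completion reports as
 * "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
 */
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Status Code Type 0x3 (path-related) and Status Code 0x2 (ANA Inaccessible)
 * match the "(03/02)" pair printed in the log above. */
static bool
cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == 0x3 && cpl->status.sc == 0x2;
}

/* I/O completion callback of the form taken by spdk_nvme_ns_cmd_read()/write().
 * The retry comment is a placeholder for whatever failover logic the host uses. */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (cpl_is_ana_inaccessible(cpl)) {
		/* Namespace is unreachable through this path; a multipath-aware
		 * host would resubmit the I/O on another qpair/controller here. */
		fprintf(stderr, "I/O failed with ANA Inaccessible (sct=0x3, sc=0x2); retry on another path\n");
		return;
	}

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
			cpl->status.sct, cpl->status.sc);
	}
}
```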
00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89174 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89174 ']' 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89174 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89174 00:18:17.350 killing process with pid 89174 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89174' 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89174 00:18:17.350 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89174 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:17.609 00:18:17.609 real 0m42.197s 00:18:17.609 user 2m19.607s 00:18:17.609 sys 0m10.217s 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.609 11:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:17.609 ************************************ 00:18:17.609 END TEST nvmf_host_multipath_status 00:18:17.609 ************************************ 00:18:17.609 11:36:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:17.609 11:36:54 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:17.609 11:36:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:17.609 11:36:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.609 11:36:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:17.609 ************************************ 00:18:17.609 START TEST nvmf_discovery_remove_ifc 00:18:17.609 ************************************ 00:18:17.609 11:36:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:17.609 * Looking for test storage... 00:18:17.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:17.609 Cannot find device "nvmf_tgt_br" 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:17.609 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:17.866 Cannot find device "nvmf_tgt_br2" 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:17.866 Cannot find device "nvmf_tgt_br" 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:17.866 Cannot find device "nvmf_tgt_br2" 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.866 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:17.867 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:17.867 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.867 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:17.867 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:18.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:18:18.125 00:18:18.125 --- 10.0.0.2 ping statistics --- 00:18:18.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.125 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:18.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:18.125 00:18:18.125 --- 10.0.0.3 ping statistics --- 00:18:18.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.125 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:18.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:18:18.125 00:18:18.125 --- 10.0.0.1 ping statistics --- 00:18:18.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.125 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90597 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90597 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90597 ']' 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.125 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.125 [2024-07-15 11:36:55.454202] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:18:18.125 [2024-07-15 11:36:55.454304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.125 [2024-07-15 11:36:55.588986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.384 [2024-07-15 11:36:55.649199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.384 [2024-07-15 11:36:55.649255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.384 [2024-07-15 11:36:55.649267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.384 [2024-07-15 11:36:55.649276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.384 [2024-07-15 11:36:55.649283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.384 [2024-07-15 11:36:55.649316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.384 [2024-07-15 11:36:55.782408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.384 [2024-07-15 11:36:55.790539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:18.384 null0 00:18:18.384 [2024-07-15 11:36:55.822514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90639 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90639 /tmp/host.sock 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90639 ']' 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.384 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.384 11:36:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.643 [2024-07-15 11:36:55.918028] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:18:18.643 [2024-07-15 11:36:55.918165] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90639 ] 00:18:18.643 [2024-07-15 11:36:56.063905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.919 [2024-07-15 11:36:56.122226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.486 11:36:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.745 11:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.745 11:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:19.745 11:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.745 11:36:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:20.680 [2024-07-15 11:36:58.015736] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:20.680 [2024-07-15 11:36:58.015781] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:20.680 [2024-07-15 11:36:58.015802] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:20.680 [2024-07-15 11:36:58.101890] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:20.938 
[2024-07-15 11:36:58.158859] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:20.938 [2024-07-15 11:36:58.158945] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:20.938 [2024-07-15 11:36:58.158974] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:20.938 [2024-07-15 11:36:58.158993] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:20.938 [2024-07-15 11:36:58.159020] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:20.938 [2024-07-15 11:36:58.164700] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x234b650 was disconnected and freed. delete nvme_qpair. 
00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:20.938 11:36:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:21.872 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.129 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:22.129 11:36:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:23.062 11:37:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:23.995 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.253 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:24.253 11:37:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:25.186 11:37:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.122 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.122 [2024-07-15 11:37:03.597242] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:26.122 [2024-07-15 11:37:03.597320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.122 [2024-07-15 11:37:03.597337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.122 [2024-07-15 11:37:03.597351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.122 [2024-07-15 11:37:03.597361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.122 [2024-07-15 11:37:03.597371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.122 [2024-07-15 11:37:03.597380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.122 [2024-07-15 11:37:03.597391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.122 [2024-07-15 11:37:03.597400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.122 [2024-07-15 11:37:03.597410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.122 [2024-07-15 11:37:03.597419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.122 [2024-07-15 11:37:03.597429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2314900 is same with the state(5) to be set 00:18:26.381 [2024-07-15 11:37:03.607237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2314900 (9): Bad file descriptor 00:18:26.381 [2024-07-15 11:37:03.617264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:26.381 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:26.381 11:37:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:27.315 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:27.315 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:27.315 [2024-07-15 11:37:04.636347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:27.315 [2024-07-15 11:37:04.636435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2314900 with addr=10.0.0.2, port=4420 00:18:27.315 [2024-07-15 11:37:04.636467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2314900 is same with the state(5) to be set 00:18:27.315 [2024-07-15 11:37:04.636531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2314900 (9): Bad file descriptor 00:18:27.316 11:37:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:27.316 [2024-07-15 11:37:04.636662] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:27.316 [2024-07-15 11:37:04.636709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:27.316 [2024-07-15 11:37:04.636729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:27.316 [2024-07-15 11:37:04.636748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:27.316 [2024-07-15 11:37:04.636787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:27.316 [2024-07-15 11:37:04.636806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:27.316 11:37:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:28.249 [2024-07-15 11:37:05.636867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:28.249 [2024-07-15 11:37:05.636945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:28.249 [2024-07-15 11:37:05.636959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:28.249 [2024-07-15 11:37:05.636970] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:28.249 [2024-07-15 11:37:05.636997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:28.249 [2024-07-15 11:37:05.637029] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:28.249 [2024-07-15 11:37:05.637101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.249 [2024-07-15 11:37:05.637119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.249 [2024-07-15 11:37:05.637133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.249 [2024-07-15 11:37:05.637142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.249 [2024-07-15 11:37:05.637153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.249 [2024-07-15 11:37:05.637162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.249 [2024-07-15 11:37:05.637172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.249 [2024-07-15 11:37:05.637181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.249 [2024-07-15 11:37:05.637192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:28.249 [2024-07-15 11:37:05.637201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:28.249 [2024-07-15 11:37:05.637210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:18:28.249 [2024-07-15 11:37:05.637308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b73e0 (9): Bad file descriptor 00:18:28.249 [2024-07-15 11:37:05.638318] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:28.249 [2024-07-15 11:37:05.638344] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.249 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:28.506 11:37:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.440 11:37:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:29.440 11:37:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.375 [2024-07-15 11:37:07.647594] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:30.375 [2024-07-15 11:37:07.647635] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:30.375 [2024-07-15 11:37:07.647656] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:30.375 [2024-07-15 11:37:07.734733] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:30.375 [2024-07-15 11:37:07.790931] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:30.375 [2024-07-15 11:37:07.790999] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:30.375 [2024-07-15 11:37:07.791023] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:30.375 [2024-07-15 11:37:07.791042] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:30.375 [2024-07-15 11:37:07.791052] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:30.375 [2024-07-15 11:37:07.796192] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2330300 was disconnected and freed. delete nvme_qpair. 
00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90639 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90639 ']' 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90639 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90639 00:18:30.633 killing process with pid 90639 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90639' 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90639 00:18:30.633 11:37:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90639 00:18:30.920 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.921 rmmod nvme_tcp 00:18:30.921 rmmod nvme_fabrics 00:18:30.921 rmmod nvme_keyring 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:30.921 11:37:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90597 ']' 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90597 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90597 ']' 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90597 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90597 00:18:30.921 killing process with pid 90597 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90597' 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90597 00:18:30.921 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90597 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:31.192 ************************************ 00:18:31.192 END TEST nvmf_discovery_remove_ifc 00:18:31.192 ************************************ 00:18:31.192 00:18:31.192 real 0m13.510s 00:18:31.192 user 0m24.750s 00:18:31.192 sys 0m1.561s 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.192 11:37:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.192 11:37:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.193 11:37:08 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:31.193 11:37:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.193 11:37:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.193 11:37:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.193 ************************************ 00:18:31.193 START TEST nvmf_identify_kernel_target 00:18:31.193 ************************************ 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:31.193 * Looking for test storage... 00:18:31.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.193 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.194 Cannot find device "nvmf_tgt_br" 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.194 Cannot find device "nvmf_tgt_br2" 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.194 Cannot find device "nvmf_tgt_br" 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:31.194 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.194 Cannot find device "nvmf_tgt_br2" 00:18:31.461 11:37:08 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:31.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:31.461 00:18:31.461 --- 10.0.0.2 ping statistics --- 00:18:31.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.461 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:31.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:31.461 00:18:31.461 --- 10.0.0.3 ping statistics --- 00:18:31.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.461 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:31.461 00:18:31.461 --- 10.0.0.1 ping statistics --- 00:18:31.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.461 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.461 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:31.720 11:37:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:31.977 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:31.977 Waiting for block devices as requested 00:18:31.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:31.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:32.235 No valid GPT data, bailing 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.235 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:32.236 No valid GPT data, bailing 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:32.236 No valid GPT data, bailing 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.236 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:32.494 No valid GPT data, bailing 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
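[Editor's sketch] The mkdir just above and the echo/ln -s steps traced next are the configure_kernel_target sequence: exporting a local NVMe block device through the Linux kernel nvmet configfs interface over TCP. A condensed, hand-written equivalent follows; the attribute file names (attr_model, attr_allow_any_host, device_path, addr_*) are the standard nvmet configfs names and are an assumption here, since the xtrace only records the redirected echo values, not the destination files.

# Condensed sketch under the assumed standard nvmet configfs layout:
modprobe nvmet                                   # nvmet_tcp is also needed; the teardown later removes both
sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$sub/attr_model"   # reported as Model Number by identify
echo 1 > "$sub/attr_allow_any_host"                           # accept connections from any host NQN
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"           # back namespace 1 with the free local disk
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"                           # listen on the initiator-side address
echo tcp  > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                              # publish the subsystem on the port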
00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -a 10.0.0.1 -t tcp -s 4420 00:18:32.494 00:18:32.494 Discovery Log Number of Records 2, Generation counter 2 00:18:32.494 =====Discovery Log Entry 0====== 00:18:32.494 trtype: tcp 00:18:32.494 adrfam: ipv4 00:18:32.494 subtype: current discovery subsystem 00:18:32.494 treq: not specified, sq flow control disable supported 00:18:32.494 portid: 1 00:18:32.494 trsvcid: 4420 00:18:32.494 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:32.494 traddr: 10.0.0.1 00:18:32.494 eflags: none 00:18:32.494 sectype: none 00:18:32.494 =====Discovery Log Entry 1====== 00:18:32.494 trtype: tcp 00:18:32.494 adrfam: ipv4 00:18:32.494 subtype: nvme subsystem 00:18:32.494 treq: not specified, sq flow control disable supported 00:18:32.494 portid: 1 00:18:32.494 trsvcid: 4420 00:18:32.494 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:32.494 traddr: 10.0.0.1 00:18:32.494 eflags: none 00:18:32.494 sectype: none 00:18:32.494 11:37:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:32.494 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:32.754 ===================================================== 00:18:32.754 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:32.754 ===================================================== 00:18:32.754 Controller Capabilities/Features 00:18:32.754 ================================ 00:18:32.754 Vendor ID: 0000 00:18:32.754 Subsystem Vendor ID: 0000 00:18:32.754 Serial Number: a8064d945169bab2fd79 00:18:32.754 Model Number: Linux 00:18:32.754 Firmware Version: 6.7.0-68 00:18:32.754 Recommended Arb Burst: 0 00:18:32.754 IEEE OUI Identifier: 00 00 00 00:18:32.754 Multi-path I/O 00:18:32.754 May have multiple subsystem ports: No 00:18:32.754 May have multiple controllers: No 00:18:32.754 Associated with SR-IOV VF: No 00:18:32.754 Max Data Transfer Size: Unlimited 00:18:32.754 Max Number of Namespaces: 0 
00:18:32.754 Max Number of I/O Queues: 1024 00:18:32.754 NVMe Specification Version (VS): 1.3 00:18:32.754 NVMe Specification Version (Identify): 1.3 00:18:32.754 Maximum Queue Entries: 1024 00:18:32.754 Contiguous Queues Required: No 00:18:32.754 Arbitration Mechanisms Supported 00:18:32.754 Weighted Round Robin: Not Supported 00:18:32.754 Vendor Specific: Not Supported 00:18:32.754 Reset Timeout: 7500 ms 00:18:32.754 Doorbell Stride: 4 bytes 00:18:32.754 NVM Subsystem Reset: Not Supported 00:18:32.754 Command Sets Supported 00:18:32.754 NVM Command Set: Supported 00:18:32.754 Boot Partition: Not Supported 00:18:32.754 Memory Page Size Minimum: 4096 bytes 00:18:32.754 Memory Page Size Maximum: 4096 bytes 00:18:32.754 Persistent Memory Region: Not Supported 00:18:32.754 Optional Asynchronous Events Supported 00:18:32.754 Namespace Attribute Notices: Not Supported 00:18:32.754 Firmware Activation Notices: Not Supported 00:18:32.754 ANA Change Notices: Not Supported 00:18:32.754 PLE Aggregate Log Change Notices: Not Supported 00:18:32.754 LBA Status Info Alert Notices: Not Supported 00:18:32.754 EGE Aggregate Log Change Notices: Not Supported 00:18:32.754 Normal NVM Subsystem Shutdown event: Not Supported 00:18:32.754 Zone Descriptor Change Notices: Not Supported 00:18:32.754 Discovery Log Change Notices: Supported 00:18:32.754 Controller Attributes 00:18:32.754 128-bit Host Identifier: Not Supported 00:18:32.754 Non-Operational Permissive Mode: Not Supported 00:18:32.754 NVM Sets: Not Supported 00:18:32.754 Read Recovery Levels: Not Supported 00:18:32.754 Endurance Groups: Not Supported 00:18:32.754 Predictable Latency Mode: Not Supported 00:18:32.754 Traffic Based Keep ALive: Not Supported 00:18:32.754 Namespace Granularity: Not Supported 00:18:32.754 SQ Associations: Not Supported 00:18:32.754 UUID List: Not Supported 00:18:32.755 Multi-Domain Subsystem: Not Supported 00:18:32.755 Fixed Capacity Management: Not Supported 00:18:32.755 Variable Capacity Management: Not Supported 00:18:32.755 Delete Endurance Group: Not Supported 00:18:32.755 Delete NVM Set: Not Supported 00:18:32.755 Extended LBA Formats Supported: Not Supported 00:18:32.755 Flexible Data Placement Supported: Not Supported 00:18:32.755 00:18:32.755 Controller Memory Buffer Support 00:18:32.755 ================================ 00:18:32.755 Supported: No 00:18:32.755 00:18:32.755 Persistent Memory Region Support 00:18:32.755 ================================ 00:18:32.755 Supported: No 00:18:32.755 00:18:32.755 Admin Command Set Attributes 00:18:32.755 ============================ 00:18:32.755 Security Send/Receive: Not Supported 00:18:32.755 Format NVM: Not Supported 00:18:32.755 Firmware Activate/Download: Not Supported 00:18:32.755 Namespace Management: Not Supported 00:18:32.755 Device Self-Test: Not Supported 00:18:32.755 Directives: Not Supported 00:18:32.755 NVMe-MI: Not Supported 00:18:32.755 Virtualization Management: Not Supported 00:18:32.755 Doorbell Buffer Config: Not Supported 00:18:32.755 Get LBA Status Capability: Not Supported 00:18:32.755 Command & Feature Lockdown Capability: Not Supported 00:18:32.755 Abort Command Limit: 1 00:18:32.755 Async Event Request Limit: 1 00:18:32.755 Number of Firmware Slots: N/A 00:18:32.755 Firmware Slot 1 Read-Only: N/A 00:18:32.755 Firmware Activation Without Reset: N/A 00:18:32.755 Multiple Update Detection Support: N/A 00:18:32.755 Firmware Update Granularity: No Information Provided 00:18:32.755 Per-Namespace SMART Log: No 00:18:32.755 Asymmetric Namespace Access Log Page: 
Not Supported 00:18:32.755 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:32.755 Command Effects Log Page: Not Supported 00:18:32.755 Get Log Page Extended Data: Supported 00:18:32.755 Telemetry Log Pages: Not Supported 00:18:32.755 Persistent Event Log Pages: Not Supported 00:18:32.755 Supported Log Pages Log Page: May Support 00:18:32.755 Commands Supported & Effects Log Page: Not Supported 00:18:32.755 Feature Identifiers & Effects Log Page:May Support 00:18:32.755 NVMe-MI Commands & Effects Log Page: May Support 00:18:32.755 Data Area 4 for Telemetry Log: Not Supported 00:18:32.755 Error Log Page Entries Supported: 1 00:18:32.755 Keep Alive: Not Supported 00:18:32.755 00:18:32.755 NVM Command Set Attributes 00:18:32.755 ========================== 00:18:32.755 Submission Queue Entry Size 00:18:32.755 Max: 1 00:18:32.755 Min: 1 00:18:32.755 Completion Queue Entry Size 00:18:32.755 Max: 1 00:18:32.755 Min: 1 00:18:32.755 Number of Namespaces: 0 00:18:32.755 Compare Command: Not Supported 00:18:32.755 Write Uncorrectable Command: Not Supported 00:18:32.755 Dataset Management Command: Not Supported 00:18:32.755 Write Zeroes Command: Not Supported 00:18:32.755 Set Features Save Field: Not Supported 00:18:32.755 Reservations: Not Supported 00:18:32.755 Timestamp: Not Supported 00:18:32.755 Copy: Not Supported 00:18:32.755 Volatile Write Cache: Not Present 00:18:32.755 Atomic Write Unit (Normal): 1 00:18:32.755 Atomic Write Unit (PFail): 1 00:18:32.755 Atomic Compare & Write Unit: 1 00:18:32.755 Fused Compare & Write: Not Supported 00:18:32.755 Scatter-Gather List 00:18:32.755 SGL Command Set: Supported 00:18:32.755 SGL Keyed: Not Supported 00:18:32.755 SGL Bit Bucket Descriptor: Not Supported 00:18:32.755 SGL Metadata Pointer: Not Supported 00:18:32.755 Oversized SGL: Not Supported 00:18:32.755 SGL Metadata Address: Not Supported 00:18:32.755 SGL Offset: Supported 00:18:32.755 Transport SGL Data Block: Not Supported 00:18:32.755 Replay Protected Memory Block: Not Supported 00:18:32.755 00:18:32.755 Firmware Slot Information 00:18:32.755 ========================= 00:18:32.755 Active slot: 0 00:18:32.755 00:18:32.755 00:18:32.755 Error Log 00:18:32.755 ========= 00:18:32.755 00:18:32.755 Active Namespaces 00:18:32.755 ================= 00:18:32.755 Discovery Log Page 00:18:32.755 ================== 00:18:32.755 Generation Counter: 2 00:18:32.755 Number of Records: 2 00:18:32.755 Record Format: 0 00:18:32.755 00:18:32.755 Discovery Log Entry 0 00:18:32.755 ---------------------- 00:18:32.755 Transport Type: 3 (TCP) 00:18:32.755 Address Family: 1 (IPv4) 00:18:32.755 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:32.755 Entry Flags: 00:18:32.755 Duplicate Returned Information: 0 00:18:32.755 Explicit Persistent Connection Support for Discovery: 0 00:18:32.755 Transport Requirements: 00:18:32.755 Secure Channel: Not Specified 00:18:32.755 Port ID: 1 (0x0001) 00:18:32.755 Controller ID: 65535 (0xffff) 00:18:32.755 Admin Max SQ Size: 32 00:18:32.755 Transport Service Identifier: 4420 00:18:32.755 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:32.755 Transport Address: 10.0.0.1 00:18:32.755 Discovery Log Entry 1 00:18:32.755 ---------------------- 00:18:32.755 Transport Type: 3 (TCP) 00:18:32.755 Address Family: 1 (IPv4) 00:18:32.755 Subsystem Type: 2 (NVM Subsystem) 00:18:32.755 Entry Flags: 00:18:32.755 Duplicate Returned Information: 0 00:18:32.755 Explicit Persistent Connection Support for Discovery: 0 00:18:32.755 Transport Requirements: 00:18:32.755 
Secure Channel: Not Specified 00:18:32.755 Port ID: 1 (0x0001) 00:18:32.755 Controller ID: 65535 (0xffff) 00:18:32.755 Admin Max SQ Size: 32 00:18:32.755 Transport Service Identifier: 4420 00:18:32.755 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:32.755 Transport Address: 10.0.0.1 00:18:32.755 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:32.756 get_feature(0x01) failed 00:18:32.756 get_feature(0x02) failed 00:18:32.756 get_feature(0x04) failed 00:18:32.756 ===================================================== 00:18:32.756 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:32.756 ===================================================== 00:18:32.756 Controller Capabilities/Features 00:18:32.756 ================================ 00:18:32.756 Vendor ID: 0000 00:18:32.756 Subsystem Vendor ID: 0000 00:18:32.756 Serial Number: 1aa2da167f47aec154a8 00:18:32.756 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:32.756 Firmware Version: 6.7.0-68 00:18:32.756 Recommended Arb Burst: 6 00:18:32.756 IEEE OUI Identifier: 00 00 00 00:18:32.756 Multi-path I/O 00:18:32.756 May have multiple subsystem ports: Yes 00:18:32.756 May have multiple controllers: Yes 00:18:32.756 Associated with SR-IOV VF: No 00:18:32.756 Max Data Transfer Size: Unlimited 00:18:32.756 Max Number of Namespaces: 1024 00:18:32.756 Max Number of I/O Queues: 128 00:18:32.756 NVMe Specification Version (VS): 1.3 00:18:32.756 NVMe Specification Version (Identify): 1.3 00:18:32.756 Maximum Queue Entries: 1024 00:18:32.756 Contiguous Queues Required: No 00:18:32.756 Arbitration Mechanisms Supported 00:18:32.756 Weighted Round Robin: Not Supported 00:18:32.756 Vendor Specific: Not Supported 00:18:32.756 Reset Timeout: 7500 ms 00:18:32.756 Doorbell Stride: 4 bytes 00:18:32.756 NVM Subsystem Reset: Not Supported 00:18:32.756 Command Sets Supported 00:18:32.756 NVM Command Set: Supported 00:18:32.756 Boot Partition: Not Supported 00:18:32.756 Memory Page Size Minimum: 4096 bytes 00:18:32.756 Memory Page Size Maximum: 4096 bytes 00:18:32.756 Persistent Memory Region: Not Supported 00:18:32.756 Optional Asynchronous Events Supported 00:18:32.756 Namespace Attribute Notices: Supported 00:18:32.756 Firmware Activation Notices: Not Supported 00:18:32.756 ANA Change Notices: Supported 00:18:32.756 PLE Aggregate Log Change Notices: Not Supported 00:18:32.756 LBA Status Info Alert Notices: Not Supported 00:18:32.756 EGE Aggregate Log Change Notices: Not Supported 00:18:32.756 Normal NVM Subsystem Shutdown event: Not Supported 00:18:32.756 Zone Descriptor Change Notices: Not Supported 00:18:32.756 Discovery Log Change Notices: Not Supported 00:18:32.756 Controller Attributes 00:18:32.756 128-bit Host Identifier: Supported 00:18:32.756 Non-Operational Permissive Mode: Not Supported 00:18:32.756 NVM Sets: Not Supported 00:18:32.756 Read Recovery Levels: Not Supported 00:18:32.756 Endurance Groups: Not Supported 00:18:32.756 Predictable Latency Mode: Not Supported 00:18:32.756 Traffic Based Keep ALive: Supported 00:18:32.756 Namespace Granularity: Not Supported 00:18:32.756 SQ Associations: Not Supported 00:18:32.756 UUID List: Not Supported 00:18:32.756 Multi-Domain Subsystem: Not Supported 00:18:32.756 Fixed Capacity Management: Not Supported 00:18:32.756 Variable Capacity Management: Not Supported 00:18:32.756 
Delete Endurance Group: Not Supported 00:18:32.756 Delete NVM Set: Not Supported 00:18:32.756 Extended LBA Formats Supported: Not Supported 00:18:32.756 Flexible Data Placement Supported: Not Supported 00:18:32.756 00:18:32.756 Controller Memory Buffer Support 00:18:32.756 ================================ 00:18:32.756 Supported: No 00:18:32.756 00:18:32.756 Persistent Memory Region Support 00:18:32.756 ================================ 00:18:32.756 Supported: No 00:18:32.756 00:18:32.756 Admin Command Set Attributes 00:18:32.756 ============================ 00:18:32.756 Security Send/Receive: Not Supported 00:18:32.756 Format NVM: Not Supported 00:18:32.756 Firmware Activate/Download: Not Supported 00:18:32.756 Namespace Management: Not Supported 00:18:32.756 Device Self-Test: Not Supported 00:18:32.756 Directives: Not Supported 00:18:32.756 NVMe-MI: Not Supported 00:18:32.756 Virtualization Management: Not Supported 00:18:32.756 Doorbell Buffer Config: Not Supported 00:18:32.756 Get LBA Status Capability: Not Supported 00:18:32.756 Command & Feature Lockdown Capability: Not Supported 00:18:32.756 Abort Command Limit: 4 00:18:32.756 Async Event Request Limit: 4 00:18:32.756 Number of Firmware Slots: N/A 00:18:32.756 Firmware Slot 1 Read-Only: N/A 00:18:32.756 Firmware Activation Without Reset: N/A 00:18:32.756 Multiple Update Detection Support: N/A 00:18:32.756 Firmware Update Granularity: No Information Provided 00:18:32.756 Per-Namespace SMART Log: Yes 00:18:32.756 Asymmetric Namespace Access Log Page: Supported 00:18:32.756 ANA Transition Time : 10 sec 00:18:32.756 00:18:32.756 Asymmetric Namespace Access Capabilities 00:18:32.756 ANA Optimized State : Supported 00:18:32.756 ANA Non-Optimized State : Supported 00:18:32.756 ANA Inaccessible State : Supported 00:18:32.756 ANA Persistent Loss State : Supported 00:18:32.756 ANA Change State : Supported 00:18:32.756 ANAGRPID is not changed : No 00:18:32.756 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:32.756 00:18:32.756 ANA Group Identifier Maximum : 128 00:18:32.756 Number of ANA Group Identifiers : 128 00:18:32.756 Max Number of Allowed Namespaces : 1024 00:18:32.756 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:32.756 Command Effects Log Page: Supported 00:18:32.756 Get Log Page Extended Data: Supported 00:18:32.756 Telemetry Log Pages: Not Supported 00:18:32.756 Persistent Event Log Pages: Not Supported 00:18:32.756 Supported Log Pages Log Page: May Support 00:18:32.756 Commands Supported & Effects Log Page: Not Supported 00:18:32.756 Feature Identifiers & Effects Log Page:May Support 00:18:32.756 NVMe-MI Commands & Effects Log Page: May Support 00:18:32.756 Data Area 4 for Telemetry Log: Not Supported 00:18:32.756 Error Log Page Entries Supported: 128 00:18:32.756 Keep Alive: Supported 00:18:32.756 Keep Alive Granularity: 1000 ms 00:18:32.756 00:18:32.756 NVM Command Set Attributes 00:18:32.756 ========================== 00:18:32.756 Submission Queue Entry Size 00:18:32.756 Max: 64 00:18:32.756 Min: 64 00:18:32.756 Completion Queue Entry Size 00:18:32.756 Max: 16 00:18:32.756 Min: 16 00:18:32.756 Number of Namespaces: 1024 00:18:32.756 Compare Command: Not Supported 00:18:32.756 Write Uncorrectable Command: Not Supported 00:18:32.756 Dataset Management Command: Supported 00:18:32.756 Write Zeroes Command: Supported 00:18:32.756 Set Features Save Field: Not Supported 00:18:32.756 Reservations: Not Supported 00:18:32.756 Timestamp: Not Supported 00:18:32.756 Copy: Not Supported 00:18:32.756 Volatile Write Cache: Present 
00:18:32.756 Atomic Write Unit (Normal): 1 00:18:32.756 Atomic Write Unit (PFail): 1 00:18:32.756 Atomic Compare & Write Unit: 1 00:18:32.756 Fused Compare & Write: Not Supported 00:18:32.756 Scatter-Gather List 00:18:32.756 SGL Command Set: Supported 00:18:32.756 SGL Keyed: Not Supported 00:18:32.756 SGL Bit Bucket Descriptor: Not Supported 00:18:32.756 SGL Metadata Pointer: Not Supported 00:18:32.756 Oversized SGL: Not Supported 00:18:32.756 SGL Metadata Address: Not Supported 00:18:32.756 SGL Offset: Supported 00:18:32.756 Transport SGL Data Block: Not Supported 00:18:32.756 Replay Protected Memory Block: Not Supported 00:18:32.756 00:18:32.756 Firmware Slot Information 00:18:32.756 ========================= 00:18:32.756 Active slot: 0 00:18:32.757 00:18:32.757 Asymmetric Namespace Access 00:18:32.757 =========================== 00:18:32.757 Change Count : 0 00:18:32.757 Number of ANA Group Descriptors : 1 00:18:32.757 ANA Group Descriptor : 0 00:18:32.757 ANA Group ID : 1 00:18:32.757 Number of NSID Values : 1 00:18:32.757 Change Count : 0 00:18:32.757 ANA State : 1 00:18:32.757 Namespace Identifier : 1 00:18:32.757 00:18:32.757 Commands Supported and Effects 00:18:32.757 ============================== 00:18:32.757 Admin Commands 00:18:32.757 -------------- 00:18:32.757 Get Log Page (02h): Supported 00:18:32.757 Identify (06h): Supported 00:18:32.757 Abort (08h): Supported 00:18:32.757 Set Features (09h): Supported 00:18:32.757 Get Features (0Ah): Supported 00:18:32.757 Asynchronous Event Request (0Ch): Supported 00:18:32.757 Keep Alive (18h): Supported 00:18:32.757 I/O Commands 00:18:32.757 ------------ 00:18:32.757 Flush (00h): Supported 00:18:32.757 Write (01h): Supported LBA-Change 00:18:32.757 Read (02h): Supported 00:18:32.757 Write Zeroes (08h): Supported LBA-Change 00:18:32.757 Dataset Management (09h): Supported 00:18:32.757 00:18:32.757 Error Log 00:18:32.757 ========= 00:18:32.757 Entry: 0 00:18:32.757 Error Count: 0x3 00:18:32.757 Submission Queue Id: 0x0 00:18:32.757 Command Id: 0x5 00:18:32.757 Phase Bit: 0 00:18:32.757 Status Code: 0x2 00:18:32.757 Status Code Type: 0x0 00:18:32.757 Do Not Retry: 1 00:18:32.757 Error Location: 0x28 00:18:32.757 LBA: 0x0 00:18:32.757 Namespace: 0x0 00:18:32.757 Vendor Log Page: 0x0 00:18:32.757 ----------- 00:18:32.757 Entry: 1 00:18:32.757 Error Count: 0x2 00:18:32.757 Submission Queue Id: 0x0 00:18:32.757 Command Id: 0x5 00:18:32.757 Phase Bit: 0 00:18:32.757 Status Code: 0x2 00:18:32.757 Status Code Type: 0x0 00:18:32.757 Do Not Retry: 1 00:18:32.757 Error Location: 0x28 00:18:32.757 LBA: 0x0 00:18:32.757 Namespace: 0x0 00:18:32.757 Vendor Log Page: 0x0 00:18:32.757 ----------- 00:18:32.757 Entry: 2 00:18:32.757 Error Count: 0x1 00:18:32.757 Submission Queue Id: 0x0 00:18:32.757 Command Id: 0x4 00:18:32.757 Phase Bit: 0 00:18:32.757 Status Code: 0x2 00:18:32.757 Status Code Type: 0x0 00:18:32.757 Do Not Retry: 1 00:18:32.757 Error Location: 0x28 00:18:32.757 LBA: 0x0 00:18:32.757 Namespace: 0x0 00:18:32.757 Vendor Log Page: 0x0 00:18:32.757 00:18:32.757 Number of Queues 00:18:32.757 ================ 00:18:32.757 Number of I/O Submission Queues: 128 00:18:32.757 Number of I/O Completion Queues: 128 00:18:32.757 00:18:32.757 ZNS Specific Controller Data 00:18:32.757 ============================ 00:18:32.757 Zone Append Size Limit: 0 00:18:32.757 00:18:32.757 00:18:32.757 Active Namespaces 00:18:32.757 ================= 00:18:32.757 get_feature(0x05) failed 00:18:32.757 Namespace ID:1 00:18:32.757 Command Set Identifier: NVM (00h) 
00:18:32.757 Deallocate: Supported 00:18:32.757 Deallocated/Unwritten Error: Not Supported 00:18:32.757 Deallocated Read Value: Unknown 00:18:32.757 Deallocate in Write Zeroes: Not Supported 00:18:32.757 Deallocated Guard Field: 0xFFFF 00:18:32.757 Flush: Supported 00:18:32.757 Reservation: Not Supported 00:18:32.757 Namespace Sharing Capabilities: Multiple Controllers 00:18:32.757 Size (in LBAs): 1310720 (5GiB) 00:18:32.757 Capacity (in LBAs): 1310720 (5GiB) 00:18:32.757 Utilization (in LBAs): 1310720 (5GiB) 00:18:32.757 UUID: 24594e3a-d9a5-4fc2-94d5-7b2b93f73e3b 00:18:32.757 Thin Provisioning: Not Supported 00:18:32.757 Per-NS Atomic Units: Yes 00:18:32.757 Atomic Boundary Size (Normal): 0 00:18:32.757 Atomic Boundary Size (PFail): 0 00:18:32.757 Atomic Boundary Offset: 0 00:18:32.757 NGUID/EUI64 Never Reused: No 00:18:32.757 ANA group ID: 1 00:18:32.757 Namespace Write Protected: No 00:18:32.757 Number of LBA Formats: 1 00:18:32.757 Current LBA Format: LBA Format #00 00:18:32.757 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:32.757 00:18:32.757 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:32.757 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.757 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.016 rmmod nvme_tcp 00:18:33.016 rmmod nvme_fabrics 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:33.016 
11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:33.016 11:37:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:33.582 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:33.582 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:33.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:33.840 ************************************ 00:18:33.840 END TEST nvmf_identify_kernel_target 00:18:33.840 ************************************ 00:18:33.840 00:18:33.840 real 0m2.647s 00:18:33.840 user 0m0.900s 00:18:33.840 sys 0m1.293s 00:18:33.840 11:37:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.840 11:37:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 11:37:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:33.840 11:37:11 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:33.840 11:37:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:33.840 11:37:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.840 11:37:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 ************************************ 00:18:33.840 START TEST nvmf_auth_host 00:18:33.840 ************************************ 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:33.840 * Looking for test storage... 
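[Editor's sketch] For reference, the clean_kernel_target teardown traced above (before nvmf_auth_host starts) reduces to the reverse of the setup sketch, under the same assumed nvmet configfs layout:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$sub/namespaces/1/enable"                      # quiesce the namespace first
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"     # unpublish the subsystem from the port
rmdir "$sub/namespaces/1" "$port" "$sub"                 # remove namespace, port, then subsystem
modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules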
00:18:33.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.840 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:33.841 Cannot find device "nvmf_tgt_br" 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:33.841 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.101 Cannot find device "nvmf_tgt_br2" 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:34.101 Cannot find device "nvmf_tgt_br" 
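The "Cannot find device" / "Cannot open network namespace" errors just above are only the pre-clean of a topology that does not exist yet and are expected. Condensed from the ip(8) calls that nvmf_veth_init issues in the surrounding entries (the interface names and addresses are exactly the ones just defined; only the grouping into one sketch is added here), the test network being assembled looks roughly like this:

  # initiator-side veth pair, left in the default namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if

  # target namespace plus two veth pairs whose far ends move into it
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # admit NVMe/TCP traffic and allow bridged forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings further down (10.0.0.2 and 10.0.0.3 from the initiator side, 10.0.0.1 from inside nvmf_tgt_ns_spdk) verify this topology is forwarding before the SPDK target is started in the namespace.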
00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:34.101 Cannot find device "nvmf_tgt_br2" 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.101 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:34.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:18:34.359 00:18:34.359 --- 10.0.0.2 ping statistics --- 00:18:34.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.359 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.359 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.359 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:34.359 00:18:34.359 --- 10.0.0.3 ping statistics --- 00:18:34.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.359 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:34.359 00:18:34.359 --- 10.0.0.1 ping statistics --- 00:18:34.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.359 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91523 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91523 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91523 ']' 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.359 11:37:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.359 11:37:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=959821bf9b8227547ffb12c004e5fbfd 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CAd 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 959821bf9b8227547ffb12c004e5fbfd 0 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 959821bf9b8227547ffb12c004e5fbfd 0 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=959821bf9b8227547ffb12c004e5fbfd 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:35.293 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CAd 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CAd 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.CAd 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f67e3bc008e1fc17c49131133dd5eee39776eaf470742a68451ac45e8320ded3 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7hs 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f67e3bc008e1fc17c49131133dd5eee39776eaf470742a68451ac45e8320ded3 3 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f67e3bc008e1fc17c49131133dd5eee39776eaf470742a68451ac45e8320ded3 3 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f67e3bc008e1fc17c49131133dd5eee39776eaf470742a68451ac45e8320ded3 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7hs 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7hs 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7hs 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2f84746585ce1cd3087658625cc58ebbb6ef245e5f23954f 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Cm8 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2f84746585ce1cd3087658625cc58ebbb6ef245e5f23954f 0 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2f84746585ce1cd3087658625cc58ebbb6ef245e5f23954f 0 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2f84746585ce1cd3087658625cc58ebbb6ef245e5f23954f 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Cm8 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Cm8 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Cm8 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ad2026f121a5a12d8041e89cf4fff442f198846a165eb3b4 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rsy 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ad2026f121a5a12d8041e89cf4fff442f198846a165eb3b4 2 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ad2026f121a5a12d8041e89cf4fff442f198846a165eb3b4 2 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ad2026f121a5a12d8041e89cf4fff442f198846a165eb3b4 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:35.552 11:37:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rsy 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rsy 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.rsy 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:35.552 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b1f20b695315d982be9e53bd7f24a13a 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zC9 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b1f20b695315d982be9e53bd7f24a13a 
1 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b1f20b695315d982be9e53bd7f24a13a 1 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b1f20b695315d982be9e53bd7f24a13a 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:35.553 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zC9 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zC9 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zC9 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5808624a10a1e24df288b865d81d50b6 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HBT 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5808624a10a1e24df288b865d81d50b6 1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5808624a10a1e24df288b865d81d50b6 1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5808624a10a1e24df288b865d81d50b6 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HBT 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HBT 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HBT 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:35.812 11:37:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e583c085bc94089a41dbc2aa199c2440e9d0d00b108f1c74 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qqe 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e583c085bc94089a41dbc2aa199c2440e9d0d00b108f1c74 2 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e583c085bc94089a41dbc2aa199c2440e9d0d00b108f1c74 2 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e583c085bc94089a41dbc2aa199c2440e9d0d00b108f1c74 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qqe 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qqe 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Qqe 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e36789ef8b1d201352efe394c5ddb1ab 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JbS 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e36789ef8b1d201352efe394c5ddb1ab 0 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e36789ef8b1d201352efe394c5ddb1ab 0 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e36789ef8b1d201352efe394c5ddb1ab 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JbS 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JbS 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.JbS 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f214c2cdbe5bd9ddecfb3c17d909bc73cf8216da6fc21f75cc355670bca0c14 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CS7 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f214c2cdbe5bd9ddecfb3c17d909bc73cf8216da6fc21f75cc355670bca0c14 3 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f214c2cdbe5bd9ddecfb3c17d909bc73cf8216da6fc21f75cc355670bca0c14 3 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f214c2cdbe5bd9ddecfb3c17d909bc73cf8216da6fc21f75cc355670bca0c14 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:35.812 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CS7 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CS7 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.CS7 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91523 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91523 ']' 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
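Every gen_dhchap_key call in the entries above follows the same recipe: draw the requested number of hex characters from /dev/urandom with xxd, hand them to a short inline python helper (its body is not echoed by xtrace) that wraps them into a DHHC-1 secret, and store the result in a mode-0600 temp file whose path becomes keys[i] or ckeys[i]. A minimal stand-alone sketch of that recipe follows; the encoding step, base64 of the key characters with a CRC-32 appended, is an assumption inferred from the DHHC-1:<hash-id>:<base64>: strings that appear later in this log, not a copy of the real helper:

  # gen_key <hash-id> <hex-len>   hash-id: 0=null 1=sha256 2=sha384 3=sha512
  gen_key() {
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters on one line
      file=$(mktemp -t spdk.key.XXX)
      # wrap as DHHC-1:<hash-id>:<base64(key + crc32 little-endian)>: (encoding assumed, see note above)
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("DHHC-1:{:02x}:{}:".format(d, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

  gen_key 0 32   # e.g. the null-digest 32-character secret used for keys[0] above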
00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.071 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.329 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CAd 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7hs ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7hs 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Cm8 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.rsy ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rsy 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zC9 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HBT ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HBT 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
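rpc_cmd in these entries is the autotest shell wrapper around scripts/rpc.py, pointed at the nvmf_tgt that was just started (default UNIX socket /var/tmp/spdk.sock); the for-loop over "${!keys[@]}" continues below until every generated key/ckey pair is registered. Stripped of the wrapper, the registrations seen so far amount to the following (the remaining keys follow the same pattern):

  # give each secret file a keyring name that the DH-HMAC-CHAP options can reference later
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.CAd
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7hs
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.Cm8
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rsy
  scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.zC9
  scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HBT

These names (key1, ckey1, and so on) are what the later bdev_nvme_attach_controller calls pass as --dhchap-key and --dhchap-ctrlr-key.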
00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Qqe 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.JbS ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.JbS 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.CS7 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
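configure_kernel_target, entered just above, is what turns this VM's own kernel into an NVMe/TCP target for the host-side half of the test: it loads nvmet, picks an unclaimed local NVMe block device as backing storage (the /sys/block/nvme* scan below settles on /dev/nvme1n1), and wires /sys/kernel/config/nvmet up for nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420. xtrace does not print redirection targets, so the mkdir/echo/ln -s entries that follow show only the values being written; the condensed sketch below fills in the destination paths with the standard kernel nvmet configfs attribute names, which is an assumption rather than a copy of common.sh:

  cfg=/sys/kernel/config/nvmet
  subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

  modprobe nvmet                    # nvmet_tcp must be present too, as the earlier teardown showed
  mkdir -p "$subsys/namespaces/1" "$cfg/ports/1"

  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"

  echo 10.0.0.1      > "$cfg/ports/1/addr_traddr"
  echo tcp           > "$cfg/ports/1/addr_trtype"
  echo 4420          > "$cfg/ports/1/addr_trsvcid"
  echo ipv4          > "$cfg/ports/1/addr_adrfam"
  ln -s "$subsys" "$cfg/ports/1/subsystems/"

Further down, nvmet_auth_init creates /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0, presumably flips attr_allow_any_host back to 0, and symlinks the host entry into the subsystem's allowed_hosts/, which is what the later mkdir, echo 0 and ln -s entries correspond to.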
00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:36.330 11:37:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:36.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:36.589 Waiting for block devices as requested 00:18:36.846 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:36.846 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:37.411 No valid GPT data, bailing 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:37.411 No valid GPT data, bailing 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:37.411 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:37.669 No valid GPT data, bailing 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:37.669 No valid GPT data, bailing 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:37.669 11:37:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:37.669 11:37:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -a 10.0.0.1 -t tcp -s 4420 00:18:37.669 00:18:37.669 Discovery Log Number of Records 2, Generation counter 2 00:18:37.669 =====Discovery Log Entry 0====== 00:18:37.669 trtype: tcp 00:18:37.669 adrfam: ipv4 00:18:37.669 subtype: current discovery subsystem 00:18:37.669 treq: not specified, sq flow control disable supported 00:18:37.669 portid: 1 00:18:37.669 trsvcid: 4420 00:18:37.669 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:37.669 traddr: 10.0.0.1 00:18:37.669 eflags: none 00:18:37.669 sectype: none 00:18:37.669 =====Discovery Log Entry 1====== 00:18:37.669 trtype: tcp 00:18:37.669 adrfam: ipv4 00:18:37.669 subtype: nvme subsystem 00:18:37.669 treq: not specified, sq flow control disable supported 00:18:37.669 portid: 1 00:18:37.669 trsvcid: 4420 00:18:37.669 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:37.669 traddr: 10.0.0.1 00:18:37.669 eflags: none 00:18:37.669 sectype: none 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.669 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 nvme0n1 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.927 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.186 nvme0n1 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.186 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.187 nvme0n1 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.187 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.446 nvme0n1 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:38.446 11:37:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.446 11:37:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.704 nvme0n1 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.704 nvme0n1 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.704 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.962 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.221 nvme0n1 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.221 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.479 nvme0n1 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.479 11:37:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.479 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.480 11:37:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.738 nvme0n1 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.738 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 nvme0n1 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.995 nvme0n1 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.995 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.253 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.253 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.253 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.254 11:37:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
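For reference, the per-iteration flow that host/auth.sh is exercising in the trace above can be summarized with the following shell sketch. This is a minimal sketch distilled from the logged commands, not the verbatim test script: it assumes rpc_cmd is the SPDK test-suite RPC helper already pointed at the running target, that the DHHC-1 keys key0..key4 (and controller keys ckey0..ckey3, where present) were registered by the earlier nvmet_auth_set_key setup steps, and it reuses the 10.0.0.1:4420 address and host/subsystem NQNs shown in the log.

# Sketch: iterate digests x DH groups x key IDs; for each combination restrict the
# initiator's DH-HMAC-CHAP options, attach with the matching key pair, verify, detach.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3 4; do
      # allow only this digest/DH-group combination on the host side
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # attach with the host key (and the controller key, when one is defined for this key ID)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # authentication succeeded if the controller shows up as nvme0
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done

The log that follows is this loop continuing through the ffdhe4096, ffdhe6144 and ffdhe8192 groups for each digest, with each successful attach reported as nvme0n1 appearing and the controller then being detached.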
00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.827 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.086 nvme0n1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.087 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.346 nvme0n1 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.346 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.347 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 nvme0n1 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 11:37:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.605 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.606 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 nvme0n1 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.864 11:37:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.864 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.122 nvme0n1 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:42.122 11:37:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.022 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.280 nvme0n1 00:18:44.280 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.280 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.280 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.280 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.280 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.280 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.538 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.538 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.538 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.539 11:37:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.801 nvme0n1 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.801 
11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:44.801 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.802 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.370 nvme0n1 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.370 11:37:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.628 nvme0n1 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:45.628 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.629 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.629 11:37:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.885 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.142 nvme0n1 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.142 11:37:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.706 nvme0n1 00:18:46.706 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.706 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.706 11:37:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.706 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.706 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.706 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.964 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.965 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.530 nvme0n1 00:18:47.530 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.530 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.530 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.530 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.531 11:37:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.097 nvme0n1 00:18:48.097 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.097 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.097 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.097 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.097 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.355 
11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.355 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
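[editor's note] The trace above repeats the same cycle for every digest / DH-group / key-id combination: configure the target key, restrict the host's DH-HMAC-CHAP options, attach, verify the controller came up, detach. A condensed sketch of one iteration follows; the rpc_cmd wrapper and the registration of the named secrets (key1/ckey1) happen earlier in host/auth.sh and are assumed here.

    # Condensed sketch of one connect_authenticate pass (assumes the SPDK target
    # is already listening on 10.0.0.1:4420 and the DH-HMAC-CHAP secrets named
    # key1/ckey1 were set up earlier in auth.sh; rpc_cmd wraps scripts/rpc.py).
    digest=sha256
    dhgroup=ffdhe4096
    keyid=1

    # Restrict the host to a single digest/DH-group pair for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with DH-HMAC-CHAP: --dhchap-key authenticates the host,
    # --dhchap-ctrlr-key (ckey) additionally authenticates the controller.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The attach only succeeds if authentication passed; verify, then tear down
    # so the next digest/DH-group combination starts from a clean controller list.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Key ids 0-3 exercise bidirectional authentication (a ckey is supplied), while key id 4 omits the controller key and tests host-only authentication, which matches the attach commands visible in the trace.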
00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.356 11:37:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.922 nvme0n1 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.922 
11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.922 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.923 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 nvme0n1 00:18:49.858 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.858 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.858 11:37:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.858 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.858 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 11:37:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 nvme0n1 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.858 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
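Every secret exchanged in this test is a DHHC-1 string of the form DHHC-1:<t>:<base64 payload>:. In the keys above the middle field ranges from 00 to 03; in the NVMe DH-HMAC-CHAP secret representation that field is commonly documented as indicating how the secret was transformed (00 meaning an untransformed secret), though nothing in this trace depends on that reading. A quick, hedged way to pull one of the traced keys apart and check the size of its decoded payload (an inspection aid only, not part of host/auth.sh):

# Split a DHHC-1 key from the trace into its fields and report the decoded payload size.
# The key value is copied verbatim from the log; the field meanings in the comments are
# the commonly documented ones and are not asserted by the trace itself.
key='DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==:'
IFS=':' read -r prefix transform payload _ <<< "$key"
echo "prefix=$prefix transform=$transform"   # DHHC-1 / 00
echo -n "$payload" | base64 -d | wc -c       # decoded payload length in bytes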
00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.859 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.118 nvme0n1 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.118 nvme0n1 00:18:50.118 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.119 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.377 nvme0n1 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.377 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.378 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.636 nvme0n1 00:18:50.636 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.636 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.636 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
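By this point the trace has cycled through sha256/ffdhe8192 and started over with sha384 and the smaller DH groups, which makes the driving structure in the host/auth.sh@100-@103 lines visible: three nested loops walk every configured digest, every DH group and every key id, and each combination gets one set-key plus connect/verify/detach pass. A compressed sketch of that structure, restricted to the values actually seen in this excerpt (the script's real arrays are defined earlier and may contain more entries), follows.

# Loop skeleton mirroring host/auth.sh@100-@103 in the trace above. Array contents are
# placeholders limited to what this excerpt shows; in the real test keys[]/ckeys[] hold
# the DHHC-1 strings and each combination is handed to nvmet_auth_set_key and
# connect_authenticate instead of being echoed.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
keys=(key0 key1 key2 key3 key4)   # placeholder entries, indices 0..4 as in the trace

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            echo "would test: digest=$digest dhgroup=$dhgroup keyid=$keyid"
        done
    done
done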
00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.637 11:37:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.637 nvme0n1 00:18:50.637 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.637 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.637 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.637 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.637 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.637 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.895 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
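The block just above is one complete connect_authenticate pass for sha384/ffdhe3072 with key 0: the initiator address is resolved (get_main_ns_ip indirects through NVMF_INITIATOR_IP and prints 10.0.0.1), the host-side DH-CHAP options are pinned to the digest/group under test, the controller is attached with the per-keyid secrets, its name is verified via bdev_nvme_get_controllers | jq, and it is detached again before the next keyid. A hedged, self-contained reconstruction of that host half is sketched below; the get_main_ns_ip body and the TEST_TRANSPORT variable name are inferred from the trace rather than copied from nvmf/common.sh, the rpc invocations assume scripts/rpc.py talking to the default SPDK socket, and the secrets referred to as key0/ckey0 are assumed to have been registered with the application earlier in the script (outside this excerpt).

# Hedged reconstruction of the host-side steps traced above. Names flagged in the
# lead-in are assumptions; the RPC method names and flags are the ones shown in the trace.
rpc=scripts/rpc.py
TEST_TRANSPORT=tcp               # assumed variable name; the trace only shows the value "tcp"
NVMF_INITIATOR_IP=10.0.0.1

get_main_ns_ip() {
    # Mirrors the nvmf/common.sh@741-@755 lines: pick the address variable for the
    # active transport, indirect-expand it, and print the result.
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] || return 1
    echo "${!ip}"
}

# Pin the host to the digest/DH group under test, then attach with keyid 0's secrets.
"$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller exists under the expected name, then detach before the next keyid.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
"$rpc" bdev_nvme_detach_controller nvme0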
00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 nvme0n1 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.896 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 nvme0n1 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.154 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.412 nvme0n1 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.412 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.670 nvme0n1 00:18:51.670 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.670 11:37:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.670 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.670 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.670 11:37:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.670 11:37:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.670 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.928 nvme0n1 00:18:51.928 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.928 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.929 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.188 nvme0n1 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.188 11:37:29 
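Note: the nvmet_auth_set_key calls traced here program the target side of DH-HMAC-CHAP before each connection attempt. Bash xtrace prints the echo commands but not their redirection targets, so only the values are visible above. The following is a minimal sketch of such a helper, under the assumption that the values are written into the kernel nvmet configfs host entry; the configfs path and attribute names are assumptions, not something shown in this trace, and the keys/ckeys arrays are the test's key tables.

# Sketch of the traced nvmet_auth_set_key step (host/auth.sh@42-@51).
# Assumption: the echoed values land in the nvmet configfs host entry.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
	echo "${dhgroup}" > "${host}/dhchap_dhgroup"        # e.g. ffdhe4096
	echo "${key}" > "${host}/dhchap_key"                # DHHC-1:... host secret
	[[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"  # bidirectional secret, when one is defined
}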
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.188 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.447 nvme0n1 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:52.447 11:37:29 
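Note: every connect_authenticate pass in this section repeats the same RPC sequence against the initiator. A condensed sketch of that sequence, reconstructed only from the rpc_cmd calls visible in the trace (rpc_cmd wraps scripts/rpc.py; the key names key0..key4/ckey0..ckey4, the 10.0.0.1 initiator address and the NQNs are taken from the log):

# Condensed from the traced host/auth.sh@55-@65 steps.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Optional controller secret, mirroring host/auth.sh@58 in the trace.
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Restrict the initiator to the digest/dhgroup combination under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach with the host secret (plus the controller secret, when defined).
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# A successful DH-HMAC-CHAP exchange leaves exactly one controller named nvme0.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

	# Tear down before the next dhgroup/key combination.
	rpc_cmd bdev_nvme_detach_controller nvme0
}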
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.447 11:37:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.706 nvme0n1 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.706 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.707 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:52.707 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.966 nvme0n1 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.966 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 nvme0n1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.533 11:37:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.791 nvme0n1 00:18:53.791 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.791 11:37:31 
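Note: all secrets in this run use the DHHC-1 textual form "DHHC-1:<hh>:<base64>:". Reading the two-digit field as the hash variant (00 = untransformed, 01/02/03 = SHA-256/384/512) is an inference rather than something stated in the log, but it is consistent with the payload sizes seen here: the :01: key decodes to 36 bytes and the :02:/:03: keys to 52/68 bytes, i.e. a 32/48/64-byte secret plus what is conventionally a 4-byte CRC-32 trailer. That can be checked directly from any of the quoted keys:

# Inspect one of the traced secrets (the keyid=2 key above); needs coreutils base64.
key='DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g:'
payload=${key#DHHC-1:??:}   # drop the "DHHC-1:01:" prefix
payload=${payload%:}        # drop the trailing ':'
echo -n "${payload}" | base64 -d | wc -c   # prints 36 = 32-byte secret + 4-byte trailer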
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.791 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.791 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.791 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.791 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.049 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.308 nvme0n1 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.308 11:37:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.875 nvme0n1 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
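Note: the nvmf/common.sh@741-@755 fragments around this point are the initiator-address lookup (get_main_ns_ip, called from host/auth.sh@61) that precedes every attach: a transport-to-variable map is consulted and, for tcp, NVMF_INITIATOR_IP (10.0.0.1 in this run) is printed. A sketch of that resolution using only the logic visible in the trace; the name of the transport variable is an assumption, since xtrace shows only its expanded value "tcp":

# Reconstructed from the traced nvmf/common.sh@741-@755 steps.
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	# $TEST_TRANSPORT is an assumed name; the trace shows the value "tcp".
	if [[ -n ${TEST_TRANSPORT:-} && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]]; then
		ip=${ip_candidates[$TEST_TRANSPORT]}
		[[ -n ${!ip:-} ]] && echo "${!ip}"   # indirect expansion -> 10.0.0.1 here
	fi
}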
00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.875 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.145 nvme0n1 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.145 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:55.411 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
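Note: at this point the trace has moved on to the third dhgroup of the sha384 pass; host/auth.sh@101 selects the next dhgroup, @102 iterates the key indices, and @103/@104 do the target-side key programming and the authenticated connect. The overall shape of the loop, as far as this section shows it (dhgroups at least ffdhe4096, ffdhe6144 and ffdhe8192; key ids 0-4; digest sha384), is simply:

# Loop structure implied by the host/auth.sh@101-@104 markers in this trace.
# Only sha384 and the three ffdhe groups appear in this section; other digests
# or dhgroups, if any, would be exercised elsewhere in the log.
digest=sha384
for dhgroup in "${dhgroups[@]}"; do          # ffdhe4096 ffdhe6144 ffdhe8192 ...
	for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
		nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
		connect_authenticate "$digest" "$dhgroup" "$keyid"   # initiator side
	done
done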
00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.412 11:37:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.974 nvme0n1 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:55.974 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.975 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.539 nvme0n1 00:18:56.539 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.539 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.539 11:37:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.539 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.539 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.539 11:37:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.797 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.363 nvme0n1 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.363 11:37:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.928 nvme0n1 00:18:57.928 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.928 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:57.928 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.928 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.928 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.186 11:37:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.186 11:37:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.750 nvme0n1 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.750 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.008 nvme0n1 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.008 11:37:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.008 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.009 nvme0n1 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.009 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.266 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.267 nvme0n1 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.267 11:37:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.267 11:37:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.267 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 nvme0n1 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.577 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.578 11:37:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.578 nvme0n1 00:18:59.578 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.578 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.578 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.578 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.578 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.578 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.835 nvme0n1 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.835 
11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.835 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.836 11:37:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.836 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.093 nvme0n1 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
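The records around this point make up one pass of connect_authenticate: bdev_nvme_set_options is narrowed to a single digest and DH group, bdev_nvme_attach_controller is issued with the matching --dhchap-key/--dhchap-ctrlr-key pair, bdev_nvme_get_controllers is checked for the expected controller name, and bdev_nvme_detach_controller tears the controller down before the next key is tried. A minimal sketch of that host-side sequence, using only the rpc_cmd calls, address and key names that appear in this trace (it assumes the SPDK application, the rpc_cmd wrapper and the key2/ckey2 keyring entries are already in place):

  # One connect/verify/detach pass for sha512 + ffdhe3072 with key index 2,
  # mirroring the rpc_cmd calls recorded in this trace.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Check that the controller came up under the expected name, then detach it.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
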
00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.093 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.379 nvme0n1 00:19:00.379 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.380 11:37:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
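The host/auth.sh@100, @101 and @102 markers that repeat through this trace come from three nested for loops driving the full matrix of digests, DH groups and key indexes. A hedged reconstruction of that driver loop, using only the array and function names visible in the trace (the exact contents of the digests and dhgroups arrays are not shown in this excerpt beyond sha384/sha512 and ffdhe2048 through ffdhe8192):

  # Loop structure implied by the host/auth.sh@100-@104 markers in this trace:
  # each iteration re-keys the target side, then runs the host-side
  # connect/verify/detach sequence shown above.
  for digest in "${digests[@]}"; do          # sha384 and sha512 appear in this excerpt
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192 appear here
      for keyid in "${!keys[@]}"; do         # key indexes 0-4; index 4 has no ctrlr key
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target-side key setup
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side attach + checks
      done
    done
  done
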
00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.380 nvme0n1 00:19:00.380 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.649 
11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.649 11:37:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.649 nvme0n1 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:00.649 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.907 nvme0n1 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.907 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.165 11:37:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.165 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.423 nvme0n1 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
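Each pass above repeats the same host-side sequence, with only the digest, DH group and key index changing between iterations. A minimal sketch of that sequence, written against scripts/rpc.py (the rpc_cmd seen in the trace is the autotest wrapper around it) and assuming the named DH-HMAC-CHAP keys key0/ckey0 were already made available to the application earlier in the run:

  # Restrict the host to one digest/dhgroup combination for this pass.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Attach to the target with the host key and, when one exists, the controller
  # key; this is the step that actually exercises the DH-HMAC-CHAP handshake.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller came up under the expected name, then detach before
  # the next digest/dhgroup/key combination.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0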
00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.423 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.424 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.682 nvme0n1 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.682 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.683 11:37:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.942 nvme0n1 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.942 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.943 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.943 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.943 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.943 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:01.943 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.943 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.202 nvme0n1 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
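The secrets echoed at auth.sh@45/@46 use the DHHC-1 representation for NVMe DH-HMAC-CHAP secrets: a version tag, a two-digit transformation identifier and a base64 field carrying the secret plus a short trailing checksum, separated by colons. Assuming the standard encoding, 00 means the secret is used as-is while 01/02/03 request a SHA-256/SHA-384/SHA-512 transformation; the keys in this run mix all four. A tiny illustrative split of one key from the trace (the field meanings are as just described, not something the log itself states):

  key='DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU:'
  IFS=':' read -r version transform blob _ <<< "$key"
  echo "version=$version transform-hash=$transform secret(base64)=$blob"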
00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.202 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.769 nvme0n1 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.769 11:37:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
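The trace markers at auth.sh@101-@104 show the loop driving all of this: an outer loop over the DH groups under test and an inner loop over the key indices, with each combination first programmed on the target side (nvmet_auth_set_key) and then exercised from the host (connect_authenticate). A digest loop presumably sits above these, but only sha512 is visible in this stretch. A rough sketch of that structure; the dhgroups and keys arrays are defined earlier in the script, and only the part visible here (ffdhe3072/4096/6144/8192, key indices 0-4) is certain:

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # push key material to the target
          connect_authenticate sha512 "$dhgroup" "$keyid"    # attach, verify, detach on the host
      done
  done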
00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.769 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.028 nvme0n1 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.028 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.029 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.595 nvme0n1 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:03.595 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.596 11:37:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.855 nvme0n1 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.855 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.114 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.373 nvme0n1 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.373 11:37:41 
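For key index 4 there is no controller key (the [[ -z '' ]] check at auth.sh@51 above, and the attach call that carries only --dhchap-key key4), so bidirectional authentication is simply skipped for that entry. The expansion traced at auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes that work: the :+ form produces the two extra arguments only when the corresponding controller key is non-empty. The idiom in isolation, with made-up placeholder values:

  ckeys=([0]="some-secret" [4]="")       # index 4 deliberately has no controller key
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "extra args: ${ckey[@]:-<none>}"  # prints <none>; with keyid=0 it would print the two arguments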
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU5ODIxYmY5YjgyMjc1NDdmZmIxMmMwMDRlNWZiZmQJ6vNU: 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3ZTNiYzAwOGUxZmMxN2M0OTEzMTEzM2RkNWVlZTM5Nzc2ZWFmNDcwNzQyYTY4NDUxYWM0NWU4MzIwZGVkM8EpFWk=: 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.373 11:37:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.307 nvme0n1 00:19:05.307 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.307 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.308 11:37:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.875 nvme0n1 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.875 11:37:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjFmMjBiNjk1MzE1ZDk4MmJlOWU1M2JkN2YyNGExM2HwY52g: 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: ]] 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTgwODYyNGExMGExZTI0ZGYyODhiODY1ZDgxZDUwYjYjGHta: 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:05.875 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.876 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.442 nvme0n1 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.442 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTU4M2MwODViYzk0MDg5YTQxZGJjMmFhMTk5YzI0NDBlOWQwZDAwYjEwOGYxYzc0ff6NVg==: 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: ]] 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTM2Nzg5ZWY4YjFkMjAxMzUyZWZlMzk0YzVkZGIxYWL8FueM: 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:06.701 11:37:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.701 11:37:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.268 nvme0n1 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWYyMTRjMmNkYmU1YmQ5ZGRlY2ZiM2MxN2Q5MDliYzczY2Y4MjE2ZGE2ZmMyMWY3NWNjMzU1NjcwYmNhMGMxNDRIF/Y=: 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:07.269 11:37:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.835 nvme0n1 00:19:07.835 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.835 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.835 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.835 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.835 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.835 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:08.094 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY4NDc0NjU4NWNlMWNkMzA4NzY1ODYyNWNjNThlYmJiNmVmMjQ1ZTVmMjM5NTRmpsXYMw==: 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWQyMDI2ZjEyMWE1YTEyZDgwNDFlODljZjRmZmY0NDJmMTk4ODQ2YTE2NWViM2I0QEy44g==: 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.095 
11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.095 2024/07/15 11:37:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:08.095 request: 00:19:08.095 { 00:19:08.095 "method": "bdev_nvme_attach_controller", 00:19:08.095 "params": { 00:19:08.095 "name": "nvme0", 00:19:08.095 "trtype": "tcp", 00:19:08.095 "traddr": "10.0.0.1", 00:19:08.095 "adrfam": "ipv4", 00:19:08.095 "trsvcid": "4420", 00:19:08.095 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:08.095 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:08.095 "prchk_reftag": false, 00:19:08.095 "prchk_guard": false, 00:19:08.095 "hdgst": false, 00:19:08.095 "ddgst": false 00:19:08.095 } 00:19:08.095 } 00:19:08.095 Got JSON-RPC error response 00:19:08.095 GoRPCClient: error on JSON-RPC call 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.095 2024/07/15 11:37:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:08.095 request: 00:19:08.095 { 00:19:08.095 "method": "bdev_nvme_attach_controller", 00:19:08.095 "params": { 00:19:08.095 "name": 
"nvme0", 00:19:08.095 "trtype": "tcp", 00:19:08.095 "traddr": "10.0.0.1", 00:19:08.095 "adrfam": "ipv4", 00:19:08.095 "trsvcid": "4420", 00:19:08.095 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:08.095 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:08.095 "prchk_reftag": false, 00:19:08.095 "prchk_guard": false, 00:19:08.095 "hdgst": false, 00:19:08.095 "ddgst": false, 00:19:08.095 "dhchap_key": "key2" 00:19:08.095 } 00:19:08.095 } 00:19:08.095 Got JSON-RPC error response 00:19:08.095 GoRPCClient: error on JSON-RPC call 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.095 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.354 2024/07/15 11:37:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:08.354 request: 00:19:08.354 { 00:19:08.354 "method": "bdev_nvme_attach_controller", 00:19:08.354 "params": { 00:19:08.354 "name": "nvme0", 00:19:08.354 "trtype": "tcp", 00:19:08.354 "traddr": "10.0.0.1", 00:19:08.354 "adrfam": "ipv4", 00:19:08.354 "trsvcid": "4420", 00:19:08.354 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:08.354 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:08.354 "prchk_reftag": false, 00:19:08.354 "prchk_guard": false, 00:19:08.354 "hdgst": false, 00:19:08.354 "ddgst": false, 00:19:08.354 "dhchap_key": "key1", 00:19:08.354 "dhchap_ctrlr_key": "ckey2" 00:19:08.354 } 00:19:08.354 } 00:19:08.354 Got JSON-RPC error response 00:19:08.354 GoRPCClient: error on JSON-RPC call 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.354 rmmod nvme_tcp 00:19:08.354 rmmod nvme_fabrics 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91523 ']' 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91523 00:19:08.354 11:37:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91523 ']' 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91523 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91523 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:08.354 killing process with pid 91523 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91523' 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91523 00:19:08.354 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91523 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:08.612 11:37:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:09.179 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:09.179 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.438 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:09.438 11:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.CAd /tmp/spdk.key-null.Cm8 /tmp/spdk.key-sha256.zC9 /tmp/spdk.key-sha384.Qqe /tmp/spdk.key-sha512.CS7 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:09.438 11:37:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:09.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:09.696 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:09.696 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:09.696 00:19:09.696 real 0m35.918s 00:19:09.696 user 0m32.351s 00:19:09.696 sys 0m3.418s 00:19:09.696 11:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.696 ************************************ 00:19:09.696 END TEST nvmf_auth_host 00:19:09.696 ************************************ 00:19:09.696 11:37:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.696 11:37:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:09.696 11:37:47 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:19:09.696 11:37:47 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:09.696 11:37:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:09.696 11:37:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.696 11:37:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:09.696 ************************************ 00:19:09.696 START TEST nvmf_digest 00:19:09.696 ************************************ 00:19:09.696 11:37:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:09.955 * Looking for test storage... 
00:19:09.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.955 11:37:47 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
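For reference, the clean_kernel_target sequence logged at the end of the auth test (just before this digest run started) unwinds the kernel nvmet configfs tree in reverse order of its creation. A condensed sketch follows, assuming the same single subsystem/namespace/port layout; the file the bare 'echo 0' writes to is assumed here to be the namespace enable attribute.

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    # Revoke host access and drop the host entry.
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Disable and remove the namespace, unlink the subsystem from the port,
    # then remove the port and subsystem directories.
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"

    # Finally unload the kernel target modules.
    modprobe -r nvmet_tcp nvmet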
00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:09.956 Cannot find device "nvmf_tgt_br" 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.956 Cannot find device "nvmf_tgt_br2" 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:09.956 Cannot find device "nvmf_tgt_br" 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:09.956 Cannot find device "nvmf_tgt_br2" 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.956 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:10.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:19:10.215 00:19:10.215 --- 10.0.0.2 ping statistics --- 00:19:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.215 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:10.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:19:10.215 00:19:10.215 --- 10.0.0.3 ping statistics --- 00:19:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.215 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:10.215 00:19:10.215 --- 10.0.0.1 ping statistics --- 00:19:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.215 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:10.215 ************************************ 00:19:10.215 START TEST nvmf_digest_clean 00:19:10.215 ************************************ 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:10.215 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93117 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93117 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93117 ']' 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.216 11:37:47 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.216 11:37:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:10.474 [2024-07-15 11:37:47.704849] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:10.474 [2024-07-15 11:37:47.704965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.474 [2024-07-15 11:37:47.853176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.474 [2024-07-15 11:37:47.920382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.474 [2024-07-15 11:37:47.920450] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.474 [2024-07-15 11:37:47.920471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.474 [2024-07-15 11:37:47.920486] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.474 [2024-07-15 11:37:47.920499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
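The nvmf_veth_init sequence above gives the digest test a self-contained network: the initiator side stays in the root namespace on 10.0.0.1, the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and a bridge plus two iptables rules tie them together. Stripped of the xtrace noise, the topology is built roughly as in the sketch below (the same commands visible in the log, run as root).

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator, two for the target; the *_br ends join the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Address the ends: initiator on .1, target listeners on .2 and .3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side ends together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP in and let the bridge forward between its own ports.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity checks, as in the log: each side must reach the other.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1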
00:19:10.474 [2024-07-15 11:37:47.920538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 null0 00:19:11.409 [2024-07-15 11:37:48.749247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.409 [2024-07-15 11:37:48.773392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93168 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93168 /var/tmp/bperf.sock 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93168 ']' 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:19:11.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:11.409 11:37:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:11.409 [2024-07-15 11:37:48.848829] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:11.409 [2024-07-15 11:37:48.848931] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93168 ] 00:19:11.667 [2024-07-15 11:37:48.982771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.667 [2024-07-15 11:37:49.042364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.667 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.667 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:11.667 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:11.667 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:11.667 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:11.926 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:11.926 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:12.492 nvme0n1 00:19:12.492 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:12.492 11:37:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:12.492 Running I/O for 2 seconds... 
00:19:15.024 00:19:15.024 Latency(us) 00:19:15.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.024 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:15.024 nvme0n1 : 2.01 18051.21 70.51 0.00 0.00 7081.37 3693.85 15609.48 00:19:15.024 =================================================================================================================== 00:19:15.024 Total : 18051.21 70.51 0.00 0.00 7081.37 3693.85 15609.48 00:19:15.024 0 00:19:15.024 11:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:15.024 11:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:15.024 11:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:15.024 11:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:15.024 11:37:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:15.024 | select(.opcode=="crc32c") 00:19:15.024 | "\(.module_name) \(.executed)"' 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93168 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93168 ']' 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93168 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93168 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:15.024 killing process with pid 93168 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93168' 00:19:15.024 Received shutdown signal, test time was about 2.000000 seconds 00:19:15.024 00:19:15.024 Latency(us) 00:19:15.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.024 =================================================================================================================== 00:19:15.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93168 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93168 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93241 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:15.024 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93241 /var/tmp/bperf.sock 00:19:15.025 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93241 ']' 00:19:15.025 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:15.025 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:15.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:15.025 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:15.025 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:15.025 11:37:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:15.025 [2024-07-15 11:37:52.428999] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:15.025 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:15.025 Zero copy mechanism will not be used. 
00:19:15.025 [2024-07-15 11:37:52.429130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93241 ] 00:19:15.283 [2024-07-15 11:37:52.573164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.283 [2024-07-15 11:37:52.631921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.234 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.234 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:16.234 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:16.234 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:16.234 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:16.494 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:16.494 11:37:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:16.753 nvme0n1 00:19:16.753 11:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:16.753 11:37:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:17.011 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:17.011 Zero copy mechanism will not be used. 00:19:17.011 Running I/O for 2 seconds... 
00:19:18.914 00:19:18.914 Latency(us) 00:19:18.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.914 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:18.914 nvme0n1 : 2.00 7673.99 959.25 0.00 0.00 2081.01 662.81 8579.26 00:19:18.914 =================================================================================================================== 00:19:18.914 Total : 7673.99 959.25 0.00 0.00 2081.01 662.81 8579.26 00:19:18.914 0 00:19:18.914 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:18.914 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:18.914 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:18.914 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:18.914 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:18.914 | select(.opcode=="crc32c") 00:19:18.914 | "\(.module_name) \(.executed)"' 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93241 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93241 ']' 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93241 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93241 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.173 killing process with pid 93241 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93241' 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93241 00:19:19.173 Received shutdown signal, test time was about 2.000000 seconds 00:19:19.173 00:19:19.173 Latency(us) 00:19:19.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.173 =================================================================================================================== 00:19:19.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.173 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93241 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93331 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93331 /var/tmp/bperf.sock 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93331 ']' 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:19.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.431 11:37:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:19.431 [2024-07-15 11:37:56.781137] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:19.431 [2024-07-15 11:37:56.781260] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93331 ] 00:19:19.690 [2024-07-15 11:37:56.922912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.690 [2024-07-15 11:37:57.006202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.623 11:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.623 11:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:20.623 11:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:20.623 11:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:20.623 11:37:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:20.881 11:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:20.882 11:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:21.140 nvme0n1 00:19:21.140 11:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:21.140 11:37:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:21.398 Running I/O for 2 seconds... 
00:19:23.297 00:19:23.297 Latency(us) 00:19:23.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.297 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:23.297 nvme0n1 : 2.00 21114.62 82.48 0.00 0.00 6055.79 2517.18 10009.13 00:19:23.297 =================================================================================================================== 00:19:23.297 Total : 21114.62 82.48 0.00 0.00 6055.79 2517.18 10009.13 00:19:23.297 0 00:19:23.297 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:23.297 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:23.297 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:23.297 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:23.297 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:23.297 | select(.opcode=="crc32c") 00:19:23.297 | "\(.module_name) \(.executed)"' 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93331 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93331 ']' 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93331 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93331 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:23.555 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:23.555 killing process with pid 93331 00:19:23.556 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93331' 00:19:23.556 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93331 00:19:23.556 Received shutdown signal, test time was about 2.000000 seconds 00:19:23.556 00:19:23.556 Latency(us) 00:19:23.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.556 =================================================================================================================== 00:19:23.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.556 11:38:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93331 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93426 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93426 /var/tmp/bperf.sock 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93426 ']' 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.814 11:38:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:23.814 [2024-07-15 11:38:01.201892] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:23.814 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:23.814 Zero copy mechanism will not be used. 
00:19:23.814 [2024-07-15 11:38:01.202014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93426 ] 00:19:24.072 [2024-07-15 11:38:01.342498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.072 [2024-07-15 11:38:01.430181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.012 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.012 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:25.012 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:25.012 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:25.012 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:25.270 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:25.270 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:25.528 nvme0n1 00:19:25.528 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:25.528 11:38:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:25.786 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:25.786 Zero copy mechanism will not be used. 00:19:25.786 Running I/O for 2 seconds... 
00:19:27.684 00:19:27.684 Latency(us) 00:19:27.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.684 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:27.684 nvme0n1 : 2.00 6451.00 806.37 0.00 0.00 2474.40 1645.85 4349.21 00:19:27.684 =================================================================================================================== 00:19:27.684 Total : 6451.00 806.37 0.00 0.00 2474.40 1645.85 4349.21 00:19:27.684 0 00:19:27.684 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:27.684 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:27.684 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:27.684 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:27.684 | select(.opcode=="crc32c") 00:19:27.684 | "\(.module_name) \(.executed)"' 00:19:27.684 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93426 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93426 ']' 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93426 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93426 00:19:27.941 killing process with pid 93426 00:19:27.941 Received shutdown signal, test time was about 2.000000 seconds 00:19:27.941 00:19:27.941 Latency(us) 00:19:27.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.941 =================================================================================================================== 00:19:27.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93426' 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93426 00:19:27.941 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93426 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93117 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 93117 ']' 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93117 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93117 00:19:28.199 killing process with pid 93117 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93117' 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93117 00:19:28.199 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93117 00:19:28.457 ************************************ 00:19:28.457 END TEST nvmf_digest_clean 00:19:28.457 ************************************ 00:19:28.457 00:19:28.457 real 0m18.131s 00:19:28.457 user 0m35.224s 00:19:28.457 sys 0m4.359s 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:28.457 ************************************ 00:19:28.457 START TEST nvmf_digest_error 00:19:28.457 ************************************ 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93535 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93535 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93535 ']' 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.457 11:38:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.457 [2024-07-15 11:38:05.872299] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:28.457 [2024-07-15 11:38:05.872419] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.715 [2024-07-15 11:38:06.009266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.715 [2024-07-15 11:38:06.078260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.715 [2024-07-15 11:38:06.078327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.715 [2024-07-15 11:38:06.078341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.715 [2024-07-15 11:38:06.078351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.715 [2024-07-15 11:38:06.078360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.715 [2024-07-15 11:38:06.078392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.653 [2024-07-15 11:38:06.943014] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:29.653 11:38:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.653 11:38:06 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.653 null0 00:19:29.653 [2024-07-15 11:38:07.019586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.653 [2024-07-15 11:38:07.043779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93579 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93579 /var/tmp/bperf.sock 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93579 ']' 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.653 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:29.653 [2024-07-15 11:38:07.113975] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:19:29.653 [2024-07-15 11:38:07.114106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93579 ] 00:19:29.912 [2024-07-15 11:38:07.253152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.912 [2024-07-15 11:38:07.320855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.170 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.170 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:30.170 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:30.170 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:30.429 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:30.429 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.429 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:30.429 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.429 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.429 11:38:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.688 nvme0n1 00:19:30.688 11:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:30.688 11:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.688 11:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:30.688 11:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.688 11:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:30.688 11:38:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:30.946 Running I/O for 2 seconds... 
00:19:30.946 [2024-07-15 11:38:08.262836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.262927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.262945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.279189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.279262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.279279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.294858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.294940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.294956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.307424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.307495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.307511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.322542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.322624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.322640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.337490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.337574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.337591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.352091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.352171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.352188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.367246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.367325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.367341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.383924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.384003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.384020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.397958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.398031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.398047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.947 [2024-07-15 11:38:08.411128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:30.947 [2024-07-15 11:38:08.411203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.947 [2024-07-15 11:38:08.411219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.206 [2024-07-15 11:38:08.426184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:31.206 [2024-07-15 11:38:08.426256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.206 [2024-07-15 11:38:08.426273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.206 [2024-07-15 11:38:08.441257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:31.206 [2024-07-15 11:38:08.441338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.206 [2024-07-15 11:38:08.441361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:31.206 [2024-07-15 11:38:08.456235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5933e0) 00:19:31.206 [2024-07-15 11:38:08.456314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.206 [2024-07-15 11:38:08.456331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[log trimmed: between 11:38:08.470 and 11:38:10.234 the host repeated the same two messages for every failed READ on qid:1 of tqpair (0x5933e0): a data digest error from nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done, then the command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c; only the timestamp, cid, and lba change from entry to entry.]
00:19:32.794
00:19:32.794 Latency(us)
00:19:32.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:32.794 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:32.794 nvme0n1 : 2.00 17229.66 67.30 0.00 0.00 7420.13 3589.59 20971.52
00:19:32.794 ===================================================================================================================
00:19:32.794 Total : 17229.66 67.30 0.00 0.00 7420.13 3589.59 20971.52
00:19:32.795 0
00:19:32.795 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 ))
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93579
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93579 ']'
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93579
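The check a few lines up is the heart of the test: bperf_rpc asks the bdevperf RPC socket for per-bdev I/O statistics and jq pulls out how many completions carried the COMMAND TRANSIENT TRANSPORT ERROR status (135 here, so the assertion (( 135 > 0 )) passes). A minimal stand-alone sketch of the same query, reconstructed from the trace rather than copied from host/digest.sh (whose helper may differ in detail):

    # Sketch reconstructed from the trace above; not the literal get_transient_errcount body.
    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat exposes per-status NVMe error counters when --nvme-error-stat is set
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # the test expects at least one transient transport error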
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:33.054 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93579
00:19:33.313 killing process with pid 93579
Received shutdown signal, test time was about 2.000000 seconds
00:19:33.313
00:19:33.313 Latency(us)
00:19:33.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:33.313 ===================================================================================================================
00:19:33.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:33.313 11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93579'
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93579
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93579
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93656
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93656 /var/tmp/bperf.sock
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93656 ']'
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
11:38:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:33.313 [2024-07-15 11:38:10.776822] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization...
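While the new bdevperf instance initializes (its EAL parameters follow below), note how run_bperf_err launched it for the next error case: a 131072-byte random-read workload at queue depth 16, started with -z so it sits idle until a perform_tests RPC arrives, with waitforlisten blocking until the RPC socket answers. A rough manual equivalent, assuming the same workspace layout; the polling loop below merely stands in for waitforlisten, whose real implementation differs:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -z: start idle and wait for an explicit perform_tests RPC instead of running immediately
    "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll the UNIX-domain RPC socket until bdevperf is ready to accept RPCs
    until "$rpc" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done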
00:19:33.313 [2024-07-15 11:38:10.777277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93656 ] 00:19:33.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:33.313 Zero copy mechanism will not be used. 00:19:33.572 [2024-07-15 11:38:10.918577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.572 [2024-07-15 11:38:10.980665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.507 11:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.507 11:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:34.507 11:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:34.507 11:38:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:34.765 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:34.765 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.765 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:34.765 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.765 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:34.765 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:35.021 nvme0n1 00:19:35.021 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:35.021 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.021 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:35.021 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.021 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:35.021 11:38:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:35.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:35.280 Zero copy mechanism will not be used. 00:19:35.280 Running I/O for 2 seconds...
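For this second pass (randread, 128 KiB I/O, queue depth 16), bdevperf was started in queued mode (-z) with its RPC socket at /var/tmp/bperf.sock, and the RPC sequence logged above does the setup before perform_tests kicks off the two-second run: enable per-status-code NVMe error counting with unlimited bdev retries, clear any previous crc32c error injection, attach the controller with data digest enabled (--ddgst), then re-arm the accel crc32c error injection in corrupt mode so the receive-side digest checks below fail. A condensed sketch of that sequence, assuming the same socket, target address and NQN as in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # Count NVMe errors per status code; -1 lets the bdev layer retry indefinitely.
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no stale crc32c error injection is active before attaching.
  $rpc -s $sock accel_error_inject_error -o crc32c -t disable
  # Attach the target with data digest enabled so received data is CRC-checked.
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm the injection: corrupt crc32c results (same -i 32 argument digest.sh uses above).
  $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the queued workload; bdevperf then runs I/O for 2 seconds as logged below.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
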
00:19:35.280 [2024-07-15 11:38:12.548995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.549066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.549083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.553316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.553366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.553382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.558163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.558213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.558230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.563154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.563204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.563219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.566628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.566673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.566689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.570811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.570859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.570874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.575134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.575184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.575200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.579924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.579987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.580003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.583506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.583587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.583604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.588892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.588958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.588975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.592523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.592591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.592606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.597443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.597507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.597524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.601131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.601187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.601203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.605565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.605623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.605639] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.609149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.609210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.609225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.613503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.613585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.613601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.618882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.618948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.618964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.623169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.623229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.623244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.627578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.627635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.280 [2024-07-15 11:38:12.627650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.280 [2024-07-15 11:38:12.632512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.280 [2024-07-15 11:38:12.632585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.632600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.635961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.636011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.636026] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.640834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.640909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.640935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.645496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.645590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.645607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.649100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.649166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.649181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.653156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.653226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.653242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.657747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.657820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.657835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.662018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.662088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.662103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.665702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.665752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:35.281 [2024-07-15 11:38:12.665769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.670604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.670674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.670690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.675446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.675519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.675535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.679944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.680008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.680023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.682899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.682951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.682966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.687104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.687177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.687193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.690933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.690993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.691009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.694831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.694901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.694916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.699181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.699256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.699272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.704668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.704747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.704763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.708060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.708126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.708141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.712876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.712955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.712970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.717081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.717162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.717177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.721100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.721175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.721190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.725538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.725631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.725646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.730150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.730233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.730248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.735579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.735654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.735669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.741355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.741438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.741455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.744719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.744781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.744796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.749007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.749078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.749094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.281 [2024-07-15 11:38:12.753825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.281 [2024-07-15 11:38:12.753887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.281 [2024-07-15 11:38:12.753904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.540 [2024-07-15 11:38:12.758382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.540 [2024-07-15 11:38:12.758430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-15 11:38:12.758445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.540 [2024-07-15 11:38:12.762188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.540 [2024-07-15 11:38:12.762237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-15 11:38:12.762251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.540 [2024-07-15 11:38:12.766090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.540 [2024-07-15 11:38:12.766140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.540 [2024-07-15 11:38:12.766155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.540 [2024-07-15 11:38:12.771396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.540 [2024-07-15 11:38:12.771455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.771469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.776016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.776066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.776080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.779532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.779592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.779607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.784670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.784720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.784735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.789123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 
[2024-07-15 11:38:12.789173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.789189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.792106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.792154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.792169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.797993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.798067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.798083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.803886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.803959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.803975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.807671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.807723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.807739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.812419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.812481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.812496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.816741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.816797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.816811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.821338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.821399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.821414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.825390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.825459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.825475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.830490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.830580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.830596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.834775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.834847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.834863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.838709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.838787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.838803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.843559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.843612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.843627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.847815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.847866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.847881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.852661] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.852710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.852725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.857062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.857114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.857128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.861180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.861230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.861245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.866352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.866401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.866416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.870164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.870211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.870226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.874206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.874258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.874272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.878916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.878973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.878988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:35.541 [2024-07-15 11:38:12.882985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.883033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.883047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.887877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.887930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.887946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.892289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.892339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.892353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.896224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.896274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.896289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.900408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.900458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.900472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.904533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.904595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.904609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.908129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.908176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.908190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.912941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.912991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.913005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.917451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.917523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.917538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.921459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.921533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.921561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.925858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.925924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.925939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.930302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.930354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.930369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.933971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.934021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.934036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.938713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.938762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.938777] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.943213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.943265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.943280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.946775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.946821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.946835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.951450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.951500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.951515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.955820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.955869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.955884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.959152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.959197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.959211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.963126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.963172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.963186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.967824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.967879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.967893] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.971126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.971179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.971193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.975565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.975616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.975631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.979798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.979853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.979868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.983557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.983604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.983618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.988448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.988496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.541 [2024-07-15 11:38:12.988512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.541 [2024-07-15 11:38:12.994007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.541 [2024-07-15 11:38:12.994059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.542 [2024-07-15 11:38:12.994074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.542 [2024-07-15 11:38:12.997474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.542 [2024-07-15 11:38:12.997518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:35.542 [2024-07-15 11:38:12.997532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.542 [2024-07-15 11:38:13.001500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.542 [2024-07-15 11:38:13.001560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.542 [2024-07-15 11:38:13.001576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.542 [2024-07-15 11:38:13.006025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.542 [2024-07-15 11:38:13.006094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.542 [2024-07-15 11:38:13.006109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.542 [2024-07-15 11:38:13.010727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.542 [2024-07-15 11:38:13.010776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.542 [2024-07-15 11:38:13.010791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.542 [2024-07-15 11:38:13.014778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.542 [2024-07-15 11:38:13.014824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.542 [2024-07-15 11:38:13.014838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.018813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.018858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.018873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.023101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.023147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.023161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.027940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.027988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.028003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.031338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.031382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.031395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.035865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.035918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.035933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.041031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.041121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.041136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.045972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.046059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.046074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.050151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.050217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.050232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.054342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.054397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.054411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.059452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.059512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.059527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.064770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.064832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.064846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.067690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.067747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.067761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.073260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.073336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.073351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.076724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.076771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.076786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.081134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.081185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.081199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.085890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.085952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.085968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.089497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 
00:19:35.799 [2024-07-15 11:38:13.089565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.089581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.093938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.093989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.094004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.098031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.098091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.098106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.102659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.102730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.102745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.107441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.107514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.107528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.111297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.111366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.111381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.115695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.115764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.115779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.120047] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.120116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.120131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.124221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.124279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.124294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.128974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.129041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.129055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.133104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.133171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.133186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.138152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.138221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.138236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.143538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.143606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.143621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.146978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.147027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.147041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:19:35.799 [2024-07-15 11:38:13.152203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.152259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.152275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.156075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.156165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.159790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.159858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.159873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.163969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.164038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.164053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.799 [2024-07-15 11:38:13.168802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.799 [2024-07-15 11:38:13.168877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.799 [2024-07-15 11:38:13.168892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.173631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.173705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.173721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.177576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.177643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.177658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.182857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.182933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.182949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.188767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.188856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.188873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.192418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.192486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.192501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.197431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.197512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.197528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.203450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.203528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.203557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.209332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.209411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.209426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.213609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.213681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.213696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.218222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.218271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.218286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.222440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.222484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.222499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.227164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.227212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.227227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.232779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.232833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.232848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.237930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.237985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.238001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.241114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.241160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.241175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.245622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.245671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 
[2024-07-15 11:38:13.245686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.249971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.250022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.250037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.254422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.254475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.254490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.258312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.258365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.258380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.263374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.263448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.263464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.266854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.266916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.266931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:35.800 [2024-07-15 11:38:13.271762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:35.800 [2024-07-15 11:38:13.271828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.800 [2024-07-15 11:38:13.271843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.276327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.276380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.276396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.280660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.280719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.280735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.284953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.285000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.285015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.289190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.289241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.289256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.294230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.294288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.294303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.298314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.298388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.298403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.303431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.303507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.303523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.307004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.307049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.307065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.311685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.311734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.311749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.315458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.315502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.315516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.320217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.320264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.320278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.324201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.324247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.324263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.328872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.328920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.328935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.333689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.333736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.333751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.337671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.337715] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.060 [2024-07-15 11:38:13.337729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.060 [2024-07-15 11:38:13.341666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.060 [2024-07-15 11:38:13.341713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.341728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.345836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.345892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.345906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.349692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.349741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.349755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.353913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.353961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.353976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.357888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.357934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.357948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.361992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.362042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.362057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.365936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.365987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.366001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.370169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.370220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.370235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.374353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.374404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.374419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.379597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.379646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.379661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.382501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.382542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.382570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.387845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.387899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.387913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.392337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.392386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.392401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.396085] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.396133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.396148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.400775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.400821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.400836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.405278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.405327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.405342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.409399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.409446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.409460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.413727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.413773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.413787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.417823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.417869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.417893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.421698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.421745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.421760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:36.061 [2024-07-15 11:38:13.426739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.426787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.426802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.432259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.432311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.432326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.437454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.437517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.437532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.441276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.441330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.441344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.445843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.445905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.445920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.450215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.450263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.450277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.454746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.454799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.454813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.458645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.458695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.458710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.462713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.462764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.061 [2024-07-15 11:38:13.462779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.061 [2024-07-15 11:38:13.467628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.061 [2024-07-15 11:38:13.467697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.467721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.472114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.472162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.472177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.477380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.477433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.477449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.480445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.480488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.480502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.485019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.485073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.485088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.489815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.489867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.489894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.494296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.494344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.494359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.498308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.498353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.498368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.503144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.503196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.503211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.507099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.507166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.507182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.511752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.511818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.511833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.516153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.516205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.516221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.520015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.520064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.520078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.524492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.524541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.524571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.528445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.528493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.528507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.062 [2024-07-15 11:38:13.532774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.062 [2024-07-15 11:38:13.532828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.062 [2024-07-15 11:38:13.532843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.537949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.538004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.538025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.542111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.542167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.542183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.546101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.546154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 
[2024-07-15 11:38:13.546168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.550596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.550663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.550678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.554007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.554060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.554075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.558419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.558467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.558482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.563277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.563328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.563343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.566639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.566684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.566699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.571117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.571164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.571179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.575837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.575884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.575899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.321 [2024-07-15 11:38:13.579360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.321 [2024-07-15 11:38:13.579406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.321 [2024-07-15 11:38:13.579421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.584040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.584088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.584104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.588475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.588536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.588567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.592510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.592589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.592605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.597051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.597133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.597150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.601247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.601315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.601330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.605434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.605502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.605517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.610154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.610222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.610238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.614185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.614242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.614257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.618086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.618137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.618153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.622865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.622927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.622943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.627121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.627186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.627201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.631989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.632042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.632057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.636516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.636586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.636602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.640794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.640843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.640858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.645366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.645431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.645446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.649649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.649721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.649738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.653768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.653841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.653856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.658667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.658740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.658756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.663844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.663920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.666957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.667023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.667039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.672369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.672443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.672458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.677000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.677069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.677084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.680585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.680660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.680675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.685507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.685594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.685610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.690372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.690447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.690463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.694147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.694224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.694239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.698972] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.699042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.699057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.704564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.704637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.704653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.709810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.709866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.709892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.712658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.712700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.712713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.718038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.718087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.718102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.723430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.723488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.723504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.727275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.727327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.727341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.731353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.731402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.731416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.735628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.735681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.735695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.739928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.739993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.740010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.743858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.743912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.743927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.748461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.748518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.748532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.752483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.752535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.752566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.756580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.756637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.756653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.761305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.761383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.761399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.766529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.766614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.766631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.769855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.769916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.769932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.774690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.774748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.774763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.779134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.779187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.779202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.783754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.783808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.783823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.787873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.787933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.787948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.322 [2024-07-15 11:38:13.792334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.322 [2024-07-15 11:38:13.792397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.322 [2024-07-15 11:38:13.792419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.797340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.797406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.797422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.801820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.801918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.801934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.806064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.806135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.806151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.811025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.811081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.811096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.814820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.814871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.814886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.819117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.819168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 
[2024-07-15 11:38:13.819182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.824059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.824117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.824133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.827230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.827274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.827289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.831819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.831865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.831879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.836738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.836784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.836798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.841325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.841373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.841388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.845286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.845334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.845348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.850143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.850195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.850210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.854080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.583 [2024-07-15 11:38:13.854126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.583 [2024-07-15 11:38:13.854141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.583 [2024-07-15 11:38:13.858166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.858213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.858228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.862478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.862524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.862539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.866442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.866493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.866508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.871308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.871356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.871371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.875521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.875579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.875594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.880057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.880103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.880118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.884809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.884855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.884870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.889033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.889079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.889093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.893610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.893656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.893670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.897170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.897215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.897229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.901822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.901869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.901894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.905195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.905242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.905257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.909977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.910027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.910042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.914974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.915019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.915033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.919743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.919791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.919806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.923068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.923111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.923125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.927612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.927660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.927676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.931718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.931762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.931776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.936507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.936565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.936581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.939946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 
00:19:36.584 [2024-07-15 11:38:13.939989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.940004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.944223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.944270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.944284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.948746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.948789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.948804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.952451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.952497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.952512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.956912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.956958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.956973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.961421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.961486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.961508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.966764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.966822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.966846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.970275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.970318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.970333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.974512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.974567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.974583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.979380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.584 [2024-07-15 11:38:13.979431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.584 [2024-07-15 11:38:13.979446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.584 [2024-07-15 11:38:13.982864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:13.982916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:13.982931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:13.987321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:13.987375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:13.987390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:13.992728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:13.992780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:13.992795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:13.997678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:13.997728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:13.997743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.002682] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.002732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.002746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.005851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.005904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.005919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.010334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.010379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.010395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.015515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.015575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.015591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.019249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.019300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.019315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.023789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.023852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.023868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.028855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.028913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.028928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.034270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.034319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.034334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.038885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.038930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.038944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.042200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.042243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.042258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.047524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.047586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.047601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.585 [2024-07-15 11:38:14.053376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.585 [2024-07-15 11:38:14.053449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.585 [2024-07-15 11:38:14.053467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.057374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.057423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.057439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.062333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.062405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.062421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.068217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.068293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.068308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.072915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.072973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.072987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.076556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.076602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.076616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.081491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.081558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.081576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.085984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.086035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.086050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.089975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.090025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.090040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.094443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.094493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.094509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.098767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.098815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.098831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.103301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.103376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.103391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.107739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.107811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.107827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.111907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.111975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.111990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.116637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.116705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.116721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.120586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.120651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.120666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.124808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.124869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.844 [2024-07-15 11:38:14.124883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.129151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.129202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.129217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.844 [2024-07-15 11:38:14.133622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.844 [2024-07-15 11:38:14.133694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.844 [2024-07-15 11:38:14.133709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.137981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.138048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.138064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.142392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.142446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.142461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.146366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.146415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.146431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.150405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.150450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.150464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.154088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.154141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.154156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.158431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.158507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.158523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.163282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.163354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.163370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.167698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.167769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.167784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.171396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.171464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.171479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.176343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.176422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.176438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.181167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.181242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.181258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.185225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.185299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.185314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.191179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.191257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.191272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.197249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.197325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.197341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.203028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.203105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.203121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.206044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.206103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.206118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.211579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.211648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.211663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.214975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.215017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.215031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.219722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.219769] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.219783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.223777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.223821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.223836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.228239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.228291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.228306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.232206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.232254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.232269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.237190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.237240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.237254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.242220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.242271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.242286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.245992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.246037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.246053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.250643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.250689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.250703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.254719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.254790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.254806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.258813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.258881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.258895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.845 [2024-07-15 11:38:14.263587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.845 [2024-07-15 11:38:14.263644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.845 [2024-07-15 11:38:14.263659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.267355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.267404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.267419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.272190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.272250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.272265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.277891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.277948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.277964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.282637] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.282692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.282708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.286062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.286118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.286132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.290596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.290665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.290680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.295541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.295643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.295658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.298918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.298961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.298976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.303357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.303402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.303417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.307915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.307967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.307981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:36.846 [2024-07-15 11:38:14.312641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.312691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.312707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.846 [2024-07-15 11:38:14.317285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:36.846 [2024-07-15 11:38:14.317334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.846 [2024-07-15 11:38:14.317350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.320855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.320903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.320918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.325209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.325259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.325274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.329958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.330021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.330037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.334111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.334157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.334172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.338930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.338976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.338991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.342635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.342678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.342693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.347004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.347048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.347062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.351615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.351661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.351676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.355151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.355198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.355212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.359531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.359591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.359607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.364317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.364368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.364382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.368215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.368260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.368274] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.372504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.372563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.372579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.376288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.376331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.376345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.380043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.380086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.380099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.384542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.384601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.384615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.388839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.388886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.388901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.393165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.393213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.393227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.397340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.397388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.397403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.402272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.402318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.402332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.406324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.406375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.410395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.410439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.410454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.415239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.415285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.415300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.419340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.419385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.419399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.424498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.424558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.424575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.429966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.430012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:37.106 [2024-07-15 11:38:14.430026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.106 [2024-07-15 11:38:14.434146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.106 [2024-07-15 11:38:14.434189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.106 [2024-07-15 11:38:14.434203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.437914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.437958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.437972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.442540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.442599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.442614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.447517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.447578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.447593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.451085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.451127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.451141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.455432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.455477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.455491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.460463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.460508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.460523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.464214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.464259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.464273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.468499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.468559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.468575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.473126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.473180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.473195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.477026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.477075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.477090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.481204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.481256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.481272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.485232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.485280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.485294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.489825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.489896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.489912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.494122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.494189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.494205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.498469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.498532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.498564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.502206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.502265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.502279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.507003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.507071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.507086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.511278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.511336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.511351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.515393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 00:19:37.107 [2024-07-15 11:38:14.515446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.107 [2024-07-15 11:38:14.515460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.107 [2024-07-15 11:38:14.519378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380) 
00:19:37.107 [2024-07-15 11:38:14.519429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.107 [2024-07-15 11:38:14.519444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:37.107 [2024-07-15 11:38:14.523503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380)
00:19:37.107 [2024-07-15 11:38:14.523566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.107 [2024-07-15 11:38:14.523582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:37.107 [2024-07-15 11:38:14.528034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380)
00:19:37.107 [2024-07-15 11:38:14.528087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.107 [2024-07-15 11:38:14.528103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:37.107 [2024-07-15 11:38:14.532469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380)
00:19:37.107 [2024-07-15 11:38:14.532522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.107 [2024-07-15 11:38:14.532538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:37.107 [2024-07-15 11:38:14.536505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15bf380)
00:19:37.107 [2024-07-15 11:38:14.536587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:37.107 [2024-07-15 11:38:14.536603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:37.107
00:19:37.107 Latency(us)
00:19:37.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:37.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:19:37.107 nvme0n1 : 2.00 7025.90 878.24 0.00 0.00 2273.13 692.60 9532.51
00:19:37.107 ===================================================================================================================
00:19:37.107 Total : 7025.90 878.24 0.00 0.00 2273.13 692.60 9532.51
00:19:37.107 0
00:19:37.107 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:37.107 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:37.107 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:37.107 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:37.107 | .driver_specific
00:19:37.107 | .nvme_error
00:19:37.107 | .status_code
00:19:37.107 |
.command_transient_transport_error' 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 453 > 0 )) 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93656 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93656 ']' 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93656 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93656 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.674 killing process with pid 93656 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93656' 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93656 00:19:37.674 Received shutdown signal, test time was about 2.000000 seconds 00:19:37.674 00:19:37.674 Latency(us) 00:19:37.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.674 =================================================================================================================== 00:19:37.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.674 11:38:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93656 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:37.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93745 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93745 /var/tmp/bperf.sock 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93745 ']' 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.674 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:37.674 [2024-07-15 11:38:15.104444] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:37.674 [2024-07-15 11:38:15.104561] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93745 ] 00:19:37.933 [2024-07-15 11:38:15.240133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.933 [2024-07-15 11:38:15.314077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.933 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.933 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:37.933 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:37.933 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:38.192 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:38.192 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.192 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:38.450 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.451 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:38.451 11:38:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:38.709 nvme0n1 00:19:38.709 11:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:38.709 11:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.709 11:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:38.709 11:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.709 11:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:38.709 11:38:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:38.709 Running I/O for 2 seconds... 
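The trace above sets up the next digest-error case (run_bperf_err randwrite 4096 128): bdevperf is launched on core mask 0x2 with its own RPC socket at /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with data digest enabled (--ddgst), and crc32c error injection is armed in the accel layer (interval 256) before perform_tests starts the 2-second I/O run. A minimal reassembly of that RPC sequence using only the arguments visible in the trace; the accel_error_inject_error calls go through rpc_cmd, whose socket is not shown in this excerpt, so the un-prefixed $rpc calls below assume the target's default socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable            # keep digests clean while the controller attaches
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256     # corrupt crc32c so WRITE data digests fail
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests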
00:19:38.709 [2024-07-15 11:38:16.170648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6458 00:19:38.709 [2024-07-15 11:38:16.171787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.709 [2024-07-15 11:38:16.171840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.185765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4de8 00:19:38.968 [2024-07-15 11:38:16.187634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.187703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.194800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e01f8 00:19:38.968 [2024-07-15 11:38:16.195646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.195689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.207126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f9b30 00:19:38.968 [2024-07-15 11:38:16.207970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.208017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.221541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e1710 00:19:38.968 [2024-07-15 11:38:16.223078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.223127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.233009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fcdd0 00:19:38.968 [2024-07-15 11:38:16.234336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.234380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.245004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4578 00:19:38.968 [2024-07-15 11:38:16.246047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.246096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.256749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f9b30 00:19:38.968 [2024-07-15 11:38:16.257633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.257681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.269619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f35f0 00:19:38.968 [2024-07-15 11:38:16.270848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.270898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.284676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4de8 00:19:38.968 [2024-07-15 11:38:16.286592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.286654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.293601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e3498 00:19:38.968 [2024-07-15 11:38:16.294493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.294537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.308437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f1ca0 00:19:38.968 [2024-07-15 11:38:16.310019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.310060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.319410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f8618 00:19:38.968 [2024-07-15 11:38:16.320443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.320490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.333286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fc560 00:19:38.968 [2024-07-15 11:38:16.334912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.334962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.345014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4de8 00:19:38.968 [2024-07-15 11:38:16.346415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.346467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.357310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e7818 00:19:38.968 [2024-07-15 11:38:16.358654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.358701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.372591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fcdd0 00:19:38.968 [2024-07-15 11:38:16.374684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.374745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.381612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6458 00:19:38.968 [2024-07-15 11:38:16.382638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.382682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.397007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e27f0 00:19:38.968 [2024-07-15 11:38:16.398702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.398757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.408579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190eaab8 00:19:38.968 [2024-07-15 11:38:16.410018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.410069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.420598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ec840 00:19:38.968 [2024-07-15 11:38:16.421986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.422039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:38.968 [2024-07-15 11:38:16.435675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f7100 00:19:38.968 [2024-07-15 11:38:16.437743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.968 [2024-07-15 11:38:16.437800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.226 [2024-07-15 11:38:16.444623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f31b8 00:19:39.226 [2024-07-15 11:38:16.445707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.445757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.459620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fc128 00:19:39.227 [2024-07-15 11:38:16.461196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.461248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.471251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190eb328 00:19:39.227 [2024-07-15 11:38:16.472673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.472715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.482772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ef270 00:19:39.227 [2024-07-15 11:38:16.484009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.484053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.496212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190efae0 00:19:39.227 [2024-07-15 11:38:16.497953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.497997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.506079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fd640 00:19:39.227 [2024-07-15 11:38:16.506836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.506877] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.518347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6b70 00:19:39.227 [2024-07-15 11:38:16.519593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.519633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.530511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f35f0 00:19:39.227 [2024-07-15 11:38:16.531268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.531311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.541963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f2d80 00:19:39.227 [2024-07-15 11:38:16.542610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.542651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.554392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190eb760 00:19:39.227 [2024-07-15 11:38:16.555142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.555183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.566106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fcdd0 00:19:39.227 [2024-07-15 11:38:16.567304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.567347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.576693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e88f8 00:19:39.227 [2024-07-15 11:38:16.577455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.577503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.591571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4de8 00:19:39.227 [2024-07-15 11:38:16.593022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.593069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.602931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fbcf0 00:19:39.227 [2024-07-15 11:38:16.604139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.604183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.614717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f0ff8 00:19:39.227 [2024-07-15 11:38:16.615691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.615733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.625751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fe2e8 00:19:39.227 [2024-07-15 11:38:16.626565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.626606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.639868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f57b0 00:19:39.227 [2024-07-15 11:38:16.641150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.641195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.651186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fe2e8 00:19:39.227 [2024-07-15 11:38:16.652460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.652510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.663312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6300 00:19:39.227 [2024-07-15 11:38:16.664474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.664517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.675993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6738 00:19:39.227 [2024-07-15 11:38:16.677317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 
11:38:16.677361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.688304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f2d80 00:19:39.227 [2024-07-15 11:38:16.689127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.689170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.227 [2024-07-15 11:38:16.700059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f7100 00:19:39.227 [2024-07-15 11:38:16.701321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.227 [2024-07-15 11:38:16.701364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.711923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f35f0 00:19:39.486 [2024-07-15 11:38:16.712928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.712970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.723411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6738 00:19:39.486 [2024-07-15 11:38:16.724253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.724296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.735998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fa7d8 00:19:39.486 [2024-07-15 11:38:16.737158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.737201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.748163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f3e60 00:19:39.486 [2024-07-15 11:38:16.748849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.748892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.761962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e5220 00:19:39.486 [2024-07-15 11:38:16.763461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14533 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:39.486 [2024-07-15 11:38:16.763506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.774827] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190efae0 00:19:39.486 [2024-07-15 11:38:16.776700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.776745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.783577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190edd58 00:19:39.486 [2024-07-15 11:38:16.784409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.784447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.795799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ecc78 00:19:39.486 [2024-07-15 11:38:16.796670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.796716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.810039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6020 00:19:39.486 [2024-07-15 11:38:16.811571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.811616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.822266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e01f8 00:19:39.486 [2024-07-15 11:38:16.823780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.823820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.835824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fa7d8 00:19:39.486 [2024-07-15 11:38:16.837816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.837859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.844323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190eb760 00:19:39.486 [2024-07-15 11:38:16.845185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12045 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.845226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.857919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f7100 00:19:39.486 [2024-07-15 11:38:16.859003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.859047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.870170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6738 00:19:39.486 [2024-07-15 11:38:16.871738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.871786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.881788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e01f8 00:19:39.486 [2024-07-15 11:38:16.883094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.883136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.893627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ed4e8 00:19:39.486 [2024-07-15 11:38:16.894886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.894927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.905791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6020 00:19:39.486 [2024-07-15 11:38:16.906563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.906604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.916712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f2948 00:19:39.486 [2024-07-15 11:38:16.917680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.917729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.931583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ed4e8 00:19:39.486 [2024-07-15 11:38:16.933175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:7920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.933222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.943116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f2510 00:19:39.486 [2024-07-15 11:38:16.944575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.944621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:39.486 [2024-07-15 11:38:16.955150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e7818 00:19:39.486 [2024-07-15 11:38:16.956443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.486 [2024-07-15 11:38:16.956485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:16.967300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fb480 00:19:39.746 [2024-07-15 11:38:16.968110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:16.968153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:16.979805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f1430 00:19:39.746 [2024-07-15 11:38:16.980782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:16.980824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:16.991298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f7538 00:19:39.746 [2024-07-15 11:38:16.992158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:16.992199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.002701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e2c28 00:19:39.746 [2024-07-15 11:38:17.003347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.003389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.016351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4578 00:19:39.746 [2024-07-15 11:38:17.017832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:11111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.017872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.027443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e38d0 00:19:39.746 [2024-07-15 11:38:17.028867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.028909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.039199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e2c28 00:19:39.746 [2024-07-15 11:38:17.040533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.040584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.051287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f0ff8 00:19:39.746 [2024-07-15 11:38:17.052143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.052184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.062719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e12d8 00:19:39.746 [2024-07-15 11:38:17.063449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.063490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.073556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e9e10 00:19:39.746 [2024-07-15 11:38:17.074510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.074569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.088297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f20d8 00:19:39.746 [2024-07-15 11:38:17.089901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.089943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.098954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f1868 00:19:39.746 [2024-07-15 11:38:17.099680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.099718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.110619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fdeb0 00:19:39.746 [2024-07-15 11:38:17.111222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.111263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.124793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ed0b0 00:19:39.746 [2024-07-15 11:38:17.126566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.126625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.136418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e9e10 00:19:39.746 [2024-07-15 11:38:17.137950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.137993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.148162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f81e0 00:19:39.746 [2024-07-15 11:38:17.149563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.149602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.160330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190df988 00:19:39.746 [2024-07-15 11:38:17.161741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.161781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.171788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e5220 00:19:39.746 [2024-07-15 11:38:17.173040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.173081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.184132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f0bc0 00:19:39.746 [2024-07-15 
11:38:17.185052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.185093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.196149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6cc8 00:19:39.746 [2024-07-15 11:38:17.197379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.746 [2024-07-15 11:38:17.197419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:39.746 [2024-07-15 11:38:17.207541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f1868 00:19:39.746 [2024-07-15 11:38:17.208640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.747 [2024-07-15 11:38:17.208682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.747 [2024-07-15 11:38:17.219452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190feb58 00:19:40.006 [2024-07-15 11:38:17.220690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.220739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.234746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190de470 00:19:40.006 [2024-07-15 11:38:17.236707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.236769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.243677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f0bc0 00:19:40.006 [2024-07-15 11:38:17.244625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.244670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.258188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6890 00:19:40.006 [2024-07-15 11:38:17.259654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.259697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.269508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ddc00 
00:19:40.006 [2024-07-15 11:38:17.270959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.271000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.281569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ea680 00:19:40.006 [2024-07-15 11:38:17.282515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.282570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.292971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fbcf0 00:19:40.006 [2024-07-15 11:38:17.293799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.293838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.304333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fb8b8 00:19:40.006 [2024-07-15 11:38:17.304966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.305005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.319181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f7538 00:19:40.006 [2024-07-15 11:38:17.321138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.321180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.327688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fd640 00:19:40.006 [2024-07-15 11:38:17.328482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.328521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.342882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e5a90 00:19:40.006 [2024-07-15 11:38:17.344686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.344733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.354353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1407880) with pdu=0x2000190f0350 00:19:40.006 [2024-07-15 11:38:17.355994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.356037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.365826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ff3c8 00:19:40.006 [2024-07-15 11:38:17.367330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.367374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.377269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e38d0 00:19:40.006 [2024-07-15 11:38:17.378607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.378649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.388725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190edd58 00:19:40.006 [2024-07-15 11:38:17.389911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.389952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.400124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190eaef0 00:19:40.006 [2024-07-15 11:38:17.401154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.401196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.411629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e88f8 00:19:40.006 [2024-07-15 11:38:17.412478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.006 [2024-07-15 11:38:17.412519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:40.006 [2024-07-15 11:38:17.426292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e01f8 00:19:40.006 [2024-07-15 11:38:17.427967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.007 [2024-07-15 11:38:17.428010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:40.007 [2024-07-15 11:38:17.437615] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fbcf0 00:19:40.007 [2024-07-15 11:38:17.439175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.007 [2024-07-15 11:38:17.439224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:40.007 [2024-07-15 11:38:17.449369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f8618 00:19:40.007 [2024-07-15 11:38:17.450432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.007 [2024-07-15 11:38:17.450480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:40.007 [2024-07-15 11:38:17.461384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ddc00 00:19:40.007 [2024-07-15 11:38:17.462607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.007 [2024-07-15 11:38:17.462663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.007 [2024-07-15 11:38:17.476500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ff3c8 00:19:40.007 [2024-07-15 11:38:17.478431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.007 [2024-07-15 11:38:17.478504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.485412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ff3c8 00:19:40.269 [2024-07-15 11:38:17.486333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.486376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.500080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ddc00 00:19:40.269 [2024-07-15 11:38:17.501669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.501714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.511560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e95a0 00:19:40.269 [2024-07-15 11:38:17.512879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.523379] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190dece0 00:19:40.269 [2024-07-15 11:38:17.524689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.524731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.535631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f6890 00:19:40.269 [2024-07-15 11:38:17.536416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.536458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.547619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f4b08 00:19:40.269 [2024-07-15 11:38:17.548767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.548812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.560997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190edd58 00:19:40.269 [2024-07-15 11:38:17.562628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.562673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.571369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e9168 00:19:40.269 [2024-07-15 11:38:17.573247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.573290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.584192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fb8b8 00:19:40.269 [2024-07-15 11:38:17.585179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.585220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.594988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e5ec8 00:19:40.269 [2024-07-15 11:38:17.596106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.596145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.269 
[2024-07-15 11:38:17.607249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190eee38 00:19:40.269 [2024-07-15 11:38:17.608402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.608447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.620899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ff3c8 00:19:40.269 [2024-07-15 11:38:17.622527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.622581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.633030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e7c50 00:19:40.269 [2024-07-15 11:38:17.634648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.634688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.643865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6738 00:19:40.269 [2024-07-15 11:38:17.645222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.645264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.655564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e1f80 00:19:40.269 [2024-07-15 11:38:17.656862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.656901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.667681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e3d08 00:19:40.269 [2024-07-15 11:38:17.668978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.669018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.679398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190dece0 00:19:40.269 [2024-07-15 11:38:17.680214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.680254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 
m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.690376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ff3c8 00:19:40.269 [2024-07-15 11:38:17.691370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.691412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.702608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e7818 00:19:40.269 [2024-07-15 11:38:17.703577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.703616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.716823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f2948 00:19:40.269 [2024-07-15 11:38:17.718470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.718519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.728147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190dece0 00:19:40.269 [2024-07-15 11:38:17.729577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.729623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:40.269 [2024-07-15 11:38:17.739991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e88f8 00:19:40.269 [2024-07-15 11:38:17.741324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.269 [2024-07-15 11:38:17.741373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.752248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ea248 00:19:40.529 [2024-07-15 11:38:17.753275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.753327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.767128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190dfdc0 00:19:40.529 [2024-07-15 11:38:17.769213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.769275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.776062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e3060 00:19:40.529 [2024-07-15 11:38:17.777105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.777152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.790718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f20d8 00:19:40.529 [2024-07-15 11:38:17.792442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.792492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.802059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fc998 00:19:40.529 [2024-07-15 11:38:17.803527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.803583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.814114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f0350 00:19:40.529 [2024-07-15 11:38:17.815556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.815599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:40.529 [2024-07-15 11:38:17.823759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f4f40 00:19:40.529 [2024-07-15 11:38:17.824495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.529 [2024-07-15 11:38:17.824538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.838354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e6738 00:19:40.530 [2024-07-15 11:38:17.839622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.839664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.849770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f96f8 00:19:40.530 [2024-07-15 11:38:17.850862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.850905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.861659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fc128 00:19:40.530 [2024-07-15 11:38:17.862909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.862952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.873814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f46d0 00:19:40.530 [2024-07-15 11:38:17.875053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.875097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.887964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e4140 00:19:40.530 [2024-07-15 11:38:17.889904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.889949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.896623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f1868 00:19:40.530 [2024-07-15 11:38:17.897535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.897583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.911216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fa7d8 00:19:40.530 [2024-07-15 11:38:17.912900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.912943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.922106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e23b8 00:19:40.530 [2024-07-15 11:38:17.923250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.923300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.934998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e9168 00:19:40.530 [2024-07-15 11:38:17.936181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.936234] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.947316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190de038 00:19:40.530 [2024-07-15 11:38:17.948836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.948883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.958856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fb480 00:19:40.530 [2024-07-15 11:38:17.960089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.960134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.970723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f1ca0 00:19:40.530 [2024-07-15 11:38:17.971749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.971789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.982003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fb048 00:19:40.530 [2024-07-15 11:38:17.982873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.982920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:40.530 [2024-07-15 11:38:17.996058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ef270 00:19:40.530 [2024-07-15 11:38:17.997571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.530 [2024-07-15 11:38:17.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.008003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190de470 00:19:40.788 [2024-07-15 11:38:18.009360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.009408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.020295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e95a0 00:19:40.788 [2024-07-15 11:38:18.021643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.021686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.033996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f46d0 00:19:40.788 [2024-07-15 11:38:18.035849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.035893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.046182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e49b0 00:19:40.788 [2024-07-15 11:38:18.048000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.048043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.056072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fac10 00:19:40.788 [2024-07-15 11:38:18.056969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.057009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.068517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f4f40 00:19:40.788 [2024-07-15 11:38:18.069924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.069972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.081614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fb8b8 00:19:40.788 [2024-07-15 11:38:18.082983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.083032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.095021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fac10 00:19:40.788 [2024-07-15 11:38:18.096304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 11:38:18.096356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:40.788 [2024-07-15 11:38:18.106767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190f92c0 00:19:40.788 [2024-07-15 11:38:18.107820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.788 [2024-07-15 
11:38:18.107868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:19:40.788 [2024-07-15 11:38:18.118597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ee5c8
00:19:40.788 [2024-07-15 11:38:18.119576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:40.788 [2024-07-15 11:38:18.119640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:19:40.788 [2024-07-15 11:38:18.131530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190fc560
00:19:40.788 [2024-07-15 11:38:18.132592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:40.788 [2024-07-15 11:38:18.132641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:19:40.788 [2024-07-15 11:38:18.143617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190ea248
00:19:40.788 [2024-07-15 11:38:18.144317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:40.788 [2024-07-15 11:38:18.144362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:19:40.788 [2024-07-15 11:38:18.157277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407880) with pdu=0x2000190e3060
00:19:40.788 [2024-07-15 11:38:18.158827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:40.788 [2024-07-15 11:38:18.158870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:40.788
00:19:40.788 Latency(us)
00:19:40.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:40.788 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:40.788 nvme0n1 : 2.00 20904.30 81.66 0.00 0.00 6113.56 2487.39 15132.86
00:19:40.788 ===================================================================================================================
00:19:40.788 Total : 20904.30 81.66 0.00 0.00 6113.56 2487.39 15132.86
00:19:40.788 0
00:19:40.788 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:40.788 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:40.788 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:40.788 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:40.788 | .driver_specific
00:19:40.788 | .nvme_error
00:19:40.788 | .status_code
00:19:40.788 | .command_transient_transport_error'
00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@73 -- # killprocess 93745 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93745 ']' 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93745 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93745 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:41.046 killing process with pid 93745 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93745' 00:19:41.046 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93745 00:19:41.047 Received shutdown signal, test time was about 2.000000 seconds 00:19:41.047 00:19:41.047 Latency(us) 00:19:41.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.047 =================================================================================================================== 00:19:41.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.047 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93745 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93822 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93822 /var/tmp/bperf.sock 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93822 ']' 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
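The trace above is the pass criterion for the 4096-byte run: get_transient_errcount reads the bdevperf instance's I/O statistics over the /var/tmp/bperf.sock RPC socket with bdev_get_iostat and pulls .driver_specific.nvme_error.status_code.command_transient_transport_error out of the JSON, and the test only requires that the count be non-zero ((( 164 > 0 )) here, one increment per injected digest error). A minimal standalone sketch of the same check, using only the repo path, socket and RPC method that appear in this log:

#!/usr/bin/env bash
# Sketch only: count transient transport errors reported for bdevperf's nvme0n1 bdev.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The run is considered good when at least one WRITE completed with
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. a detected data digest failure.
(( errcount > 0 )) || exit 1
echo "transient transport errors: $errcount"

The nvme_error counters are presumably only populated because the controller was created with --nvme-error-stat, which the setup trace below passes to bdev_nvme_set_options before attaching the controller.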
00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.305 11:38:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:41.305 [2024-07-15 11:38:18.681998] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:41.305 [2024-07-15 11:38:18.682098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93822 ] 00:19:41.305 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:41.305 Zero copy mechanism will not be used. 00:19:41.563 [2024-07-15 11:38:18.819425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.563 [2024-07-15 11:38:18.878519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.498 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:42.499 11:38:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:43.066 nvme0n1 00:19:43.066 11:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:43.066 11:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.066 11:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:43.066 11:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.066 11:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:43.066 11:38:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:43.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:43.066 Zero copy mechanism will not be used. 00:19:43.066 Running I/O for 2 seconds... 
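Before the two-second I/O loop that produces the error lines below, the trace above configures the new bdevperf instance for the 131072-byte, queue-depth-16 randwrite case: error statistics and unlimited bdev retries are enabled, the controller is attached over TCP with data digest (--ddgst) turned on, and crc32c corruption is injected so that the digests the target computes no longer match the payload. A condensed sketch of that sequence under the same paths and arguments shown in the trace; note that the accel_error_inject_error call is issued through rpc_cmd, which in this harness appears to go to the NVMe-oF target application rather than to the bperf socket, so it is left as a comment here:

#!/usr/bin/env bash
# Sketch of the second error run (randwrite, 128 KiB I/O, qd 16), paths as in this log.
spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket and wait for configuration (-z).
"$spdk"/build/examples/bdevperf -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

# Keep per-status-code NVMe error counters and retry transient errors indefinitely.
"$spdk"/scripts/rpc.py -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the target subsystem with TCP data digest (DDGST) enabled so payload CRC32C is verified.
"$spdk"/scripts/rpc.py -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target side (via rpc_cmd in the harness): accel_error_inject_error -o crc32c -t corrupt -i 32,
# which seemingly corrupts every 32nd crc32c operation and causes the
# "Data digest error" / COMMAND TRANSIENT TRANSPORT ERROR pairs logged below.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests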
00:19:43.066 [2024-07-15 11:38:20.397671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.398020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.066 [2024-07-15 11:38:20.398063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.066 [2024-07-15 11:38:20.403426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.403799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.066 [2024-07-15 11:38:20.403840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.066 [2024-07-15 11:38:20.408932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.409247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.066 [2024-07-15 11:38:20.409290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.066 [2024-07-15 11:38:20.414498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.414841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.066 [2024-07-15 11:38:20.414884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.066 [2024-07-15 11:38:20.419987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.420315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.066 [2024-07-15 11:38:20.420360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.066 [2024-07-15 11:38:20.425428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.425766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.066 [2024-07-15 11:38:20.425808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.066 [2024-07-15 11:38:20.430887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.066 [2024-07-15 11:38:20.431213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.431256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.436287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.436646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.436694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.441738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.442080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.442122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.447264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.447667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.447714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.453239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.453613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.453658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.458778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.459121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.459163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.464195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.464523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.464576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.469712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.470056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.470100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.475133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.475453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.475495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.480486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.480823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.480863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.485910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.486230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.486270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.491321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.491653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.491693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.496729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.497056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.497098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.502190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.502516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.502573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.507568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.507893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.507937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.512965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.513328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.513376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.518673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.519109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.519158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.524395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.524748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.524795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.529840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.530193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.530235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.535279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.535635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.535679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.067 [2024-07-15 11:38:20.540845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.067 [2024-07-15 11:38:20.541181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.067 [2024-07-15 11:38:20.541227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.546347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.546713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 
[2024-07-15 11:38:20.546756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.551899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.552255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.552299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.557426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.557785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.557828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.562967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.563313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.563358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.568447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.568794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.568848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.573945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.574300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.574344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.579474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.579827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.579870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.585013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.327 [2024-07-15 11:38:20.585339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.327 [2024-07-15 11:38:20.585382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.327 [2024-07-15 11:38:20.590561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.590893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.590935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.596026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.596373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.596416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.601465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.601822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.601864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.607017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.607367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.607410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.612434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.612792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.612833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.617907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.618249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.623287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.623607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.623667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.628517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.628844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.628886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.633775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.634099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.634141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.639093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.639403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.639446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.644381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.644714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.644759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.649754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.650183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.650236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.655099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.655409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.655454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.660066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.660362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.660403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.665356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.665668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.665712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.670304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.670598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.670637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.675132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.675406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.675448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.679889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.680148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.680189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.684760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.685021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.685062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.689564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.689828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.689867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.694423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 
[2024-07-15 11:38:20.694689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.694725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.699252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.699570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.699622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.704273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.704605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.704638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.709277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.709590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.709622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.714229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.714496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.714560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.719044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.719300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.719346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.724081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.724326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.724363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.728812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.729058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.328 [2024-07-15 11:38:20.729096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.328 [2024-07-15 11:38:20.733439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.328 [2024-07-15 11:38:20.733702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.733737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.738105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.738344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.738381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.742822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.743065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.743104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.747483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.747745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.747796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.752156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.752395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.752429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.756894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.757134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.757169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.761519] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.761773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.761809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.766192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.766431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.766467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.770871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.771111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.771155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.775517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.775772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.775807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.780147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.780395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.780433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.784773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.785021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.785065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.789430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.789686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.789731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:43.329 [2024-07-15 11:38:20.794111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.794365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.794401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.329 [2024-07-15 11:38:20.798801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.329 [2024-07-15 11:38:20.799053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.329 [2024-07-15 11:38:20.799093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.803574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.803866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.803897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.808355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.808664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.808696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.813202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.813485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.813516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.818007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.818276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.818305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.822705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.822947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.822985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.827330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.827595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.827631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.832136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.832386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.832423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.836744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.836974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.837011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.841433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.841675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.841711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.846119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.846345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.850808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.851037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.851073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.855491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.855740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.855777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.860234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.860462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.590 [2024-07-15 11:38:20.860498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.590 [2024-07-15 11:38:20.864944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.590 [2024-07-15 11:38:20.865172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.865207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.869589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.869831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.869865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.874238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.874470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.874506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.878963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.879182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.879219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.883664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.883885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.883919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.888288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.888507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.888563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.892931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.893150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.893183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.897612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.897831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.897864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.902277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.902495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.902529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.906974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.907196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.907232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.911622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.911845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.911881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.916252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.916482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.916516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.920957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.921181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 
[2024-07-15 11:38:20.921216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.925693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.925921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.925954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.930333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.930580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.930614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.935044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.935307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.939653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.939872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.939907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.944285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.944501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.944538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.948910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.949132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.949168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.953526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.953767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.953803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.958213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.958433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.958468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.963311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.963540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.963587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.968039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.968261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.968296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.972673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.972967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.973027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.977680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.977953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.977995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.982631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.982858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.982902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.987829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.988050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.591 [2024-07-15 11:38:20.988081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.591 [2024-07-15 11:38:20.992534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.591 [2024-07-15 11:38:20.992783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:20.992849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:20.997233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:20.997456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:20.997482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.001873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.002153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.006635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.006855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.006890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.011346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.011568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.011603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.016104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.016315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.016341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.020825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.021040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.021079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.025502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.025729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.025767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.030299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.030523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.030572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.035000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.035227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.035264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.039705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.039917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.039953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.045627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.045968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.045998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.052928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 [2024-07-15 11:38:21.053217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.053244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.592 [2024-07-15 11:38:21.059686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.592 
[2024-07-15 11:38:21.059948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.592 [2024-07-15 11:38:21.059981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.851 [2024-07-15 11:38:21.065558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.851 [2024-07-15 11:38:21.065786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.851 [2024-07-15 11:38:21.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.851 [2024-07-15 11:38:21.070361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.851 [2024-07-15 11:38:21.070598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.851 [2024-07-15 11:38:21.070626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.851 [2024-07-15 11:38:21.075130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.851 [2024-07-15 11:38:21.075361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.851 [2024-07-15 11:38:21.075388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.851 [2024-07-15 11:38:21.079846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.851 [2024-07-15 11:38:21.080072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.851 [2024-07-15 11:38:21.080109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.085016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.085231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.085262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.089870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.090118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.090154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.094565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.094790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.094831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.099334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.099561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.099601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.104054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.104264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.104306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.108836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.109045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.109090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.113513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.113741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.113779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.118258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.118481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.118521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.123035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.123259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.123293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.127784] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.127997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.128024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.132467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.132705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.132733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.137169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.137396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.137422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.141911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.142139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.142176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.146739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.146959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.146986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.151425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.151650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.151678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.156159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.156393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.156421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:19:43.852 [2024-07-15 11:38:21.160982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.161198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.161229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.165674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.165915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.165956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.170492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.170725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.170762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.175351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.175577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.175612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.180200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.180493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.180522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.186413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.186671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.186701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.191341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.191577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.191614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.196129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.196361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.852 [2024-07-15 11:38:21.196402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.852 [2024-07-15 11:38:21.202500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.852 [2024-07-15 11:38:21.202782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.202822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.209381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.209650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.209691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.216717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.216979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.217019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.223862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.224132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.224175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.231256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.231519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.231576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.238194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.238453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.238496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.245140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.245400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.245440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.252136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.252391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.252434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.259125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.259465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.259508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.266333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.266620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.266659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.273209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.273465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.273511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.280102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.280354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.280396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.287103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.287362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.287404] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.294066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.294322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.294358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.301000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.301245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.301285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.307444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.307740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.307777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.314193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.314454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.314491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:43.853 [2024-07-15 11:38:21.320828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:43.853 [2024-07-15 11:38:21.321082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.853 [2024-07-15 11:38:21.321123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.330329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.330663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.330707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.339191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.339440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:44.113 [2024-07-15 11:38:21.339479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.344756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.345037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.345077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.350278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.350502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.350541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.355697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.355929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.355970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.361079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.361298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.361334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.366482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.366720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.366756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.371995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.372223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.372263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.377370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.377604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.377641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.382915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.383177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.383218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.388421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.388670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.388708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.393918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.394145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.394184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.399455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.399693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.399732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.404924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.405142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.405174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.410257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.410481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.410516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.415712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.415950] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.415989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.421176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.421398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.113 [2024-07-15 11:38:21.421440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.113 [2024-07-15 11:38:21.426629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.113 [2024-07-15 11:38:21.426862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.426905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.432128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.432368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.432406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.437481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.437727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.437778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.443010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.443234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.443271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.448382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.448612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.448642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.453993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.454223] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.454257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.459455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.459691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.459718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.464863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.465120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.470273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.470507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.470555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.475695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.475912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.475946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.481170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.481395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.481430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.486623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.486865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.486900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.492088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 
00:19:44.114 [2024-07-15 11:38:21.492315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.492342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.497570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.497798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.497824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.504046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.504311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.504346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.510219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.510444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.510484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.515696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.515930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.515967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.521147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.521368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.521413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.526619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.526839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.526875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.532167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.532394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.532436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.537596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.537829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.537868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.543065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.543289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.543326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.548485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.548733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.548777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.553865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.554104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.554139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.559258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.559476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.559517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.564798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.565023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.565064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.570160] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.570382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.570431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.575500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.575732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.575773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.580994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.581214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.114 [2024-07-15 11:38:21.581255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.114 [2024-07-15 11:38:21.586505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.114 [2024-07-15 11:38:21.586744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.115 [2024-07-15 11:38:21.586779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.591957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.592176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.592216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.597339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.597571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.597609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.602744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.602975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.603016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
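[editor note] The records above and below all follow one pattern: the TCP transport's CRC32C completion callback (data_crc32_calc_done in tcp.c) reports that the computed data digest of an incoming PDU does not match the digest the PDU carried, and the affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. not marked do-not-retry. As a rough, standalone illustration of the kind of check being exercised here (not SPDK's implementation; the crc32c_update/data_digest_ok names, the 0xFFFFFFFF seed and the final inversion are assumptions made for this sketch), the following plain C program computes a CRC32C over a payload and compares it against the digest a PDU would carry:

    /*
     * Illustration only: a minimal CRC32C (Castagnoli) data-digest check of the
     * kind implied by the "data_crc32_calc_done: Data digest error" records in
     * this log.  Generic bit-by-bit sketch, not SPDK code.
     */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Reflected form of the CRC32C (Castagnoli) polynomial. */
    #define CRC32C_POLY_REFLECTED 0x82F63B78u

    static uint32_t crc32c_update(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        while (len--) {
            crc ^= *p++;
            for (int bit = 0; bit < 8; bit++) {
                /* Shift right; fold in the polynomial when the LSB was set. */
                crc = (crc >> 1) ^ ((crc & 1) ? CRC32C_POLY_REFLECTED : 0);
            }
        }
        return crc;
    }

    /* Compare the locally computed digest with the digest carried in the PDU. */
    static bool data_digest_ok(const void *payload, size_t len, uint32_t pdu_digest)
    {
        uint32_t crc = crc32c_update(0xFFFFFFFFu, payload, len) ^ 0xFFFFFFFFu;

        return crc == pdu_digest;
    }

    int main(void)
    {
        uint8_t payload[32];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c_update(0xFFFFFFFFu, payload, sizeof(payload)) ^ 0xFFFFFFFFu;

        /* A matching digest passes; a corrupted digest reproduces the error path. */
        printf("match:    %s\n", data_digest_ok(payload, sizeof(payload), good) ? "ok" : "digest error");
        printf("mismatch: %s\n", data_digest_ok(payload, sizeof(payload), good ^ 1) ? "ok" : "digest error");
        return 0;
    }

In this test the mismatch path is being driven deliberately, which is why the same digest-error/transient-transport-error pair repeats for every LBA below. [end editor note]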
00:19:44.375 [2024-07-15 11:38:21.608095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.608312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.608350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.613498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.613734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.613773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.619003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.619232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.619274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.624454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.624688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.624718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.629851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.630084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.630125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.634811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.635017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.635056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.639323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.639540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.639590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.643861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.644078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.644119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.648459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.648676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.648718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.652962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.653180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.653221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.657493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.657714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.657753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.662052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.662256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.662290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.666635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.666844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.666882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.671151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.671354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.671392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.675808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.676022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.676062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.680456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.680690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.680723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.684946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.685194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.685267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.689533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.689762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.689806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.694110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.694328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.694370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.698725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.698926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.698966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.703408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.703627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.703666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.375 [2024-07-15 11:38:21.707940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.375 [2024-07-15 11:38:21.708144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.375 [2024-07-15 11:38:21.708179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.712585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.712792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.712831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.717218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.717431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.717470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.721824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.722039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.722072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.726493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.726720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.726759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.731003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.731223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.731263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.735668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.735876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 
[2024-07-15 11:38:21.735913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.740185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.740388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.740426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.744726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.744939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.744978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.749313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.749532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.749588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.753869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.754117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.754146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.758400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.758623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.758666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.762967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.763173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.763221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.767446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.767663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.767704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.771997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.772200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.772246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.776567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.776801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.776840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.781170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.781383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.781417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.785818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.786036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.786074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.790529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.790772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.790812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.795075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.795284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.795321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.800134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.800377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.800413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.805692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.805918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.805944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.810338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.810566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.810609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.814926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.815135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.815171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.819436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.819668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.819706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.824015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.824230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.824266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.828640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.828885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.828915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.833640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.833872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.833922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.838308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.838535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.838587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.376 [2024-07-15 11:38:21.843057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.376 [2024-07-15 11:38:21.843270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.376 [2024-07-15 11:38:21.843311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.377 [2024-07-15 11:38:21.847681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.377 [2024-07-15 11:38:21.847910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.377 [2024-07-15 11:38:21.847953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.852240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.852454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.852479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.856833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.857041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.857077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.861282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.861504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.861531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.865908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 
[2024-07-15 11:38:21.866123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.866171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.870409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.870631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.870667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.874930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.875151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.875187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.879487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.879716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.879741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.884006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.884218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.884256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.888560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.888776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.888802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.893036] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.893244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.893284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.897589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.897798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.897832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.902121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.902330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.902365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.906701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.906939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.906966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.636 [2024-07-15 11:38:21.911329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.636 [2024-07-15 11:38:21.911590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.636 [2024-07-15 11:38:21.911620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.915904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.916136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.916173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.920423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.920641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.920676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.924958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.925164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.925197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.929394] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.929626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.929662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.933910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.934113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.934152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.938464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.938689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.938716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.943042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.943245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.943280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.947541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.947777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.947814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.952023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.952227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.952261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.956493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.956711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.956745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
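Editor's note (sketch, not part of the captured log): the entries above come from a short bdevperf pass over an NVMe/TCP bdev with the data digest (CRC32C over the PDU payload) deliberately corrupted, so every mismatch is completed back to the host as the COMMAND TRANSIENT TRANSPORT ERROR shown. As a rough, hedged sketch only — the binary path, the -z/RPC wiring, and the bdev attachment are assumptions; the real flow lives in host/digest.sh — a comparable job matching the parameters reported in the summary further down (core mask 0x2, randwrite, queue depth 16, 128 KiB I/O, 2 s, RPC socket /var/tmp/bperf.sock) could be launched as:

    # sketch: start bdevperf idle (-z) on the bperf RPC socket used later in this log;
    # the nvme0n1 bdev is attached and the run is kicked off separately over RPC.
    # Binary location varies by SPDK version (e.g. build/examples/ vs test/bdev/bdevperf/).
    ./build/examples/bdevperf -m 0x2 -z -r /var/tmp/bperf.sock \
        -q 16 -o 131072 -w randwrite -t 2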
00:19:44.637 [2024-07-15 11:38:21.961006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.961213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.961240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.965541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.965798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.965826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.970114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.970347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.970375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.974799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.975028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.975055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.979332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.979565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.979595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.983852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.984073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.984108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.988410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.988634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.988660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.992989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.993243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.993284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:21.997569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:21.997787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:21.997819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.002582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.002807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.002850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.007242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.007470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.007500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.011805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.012017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.012066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.016253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.016462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.016509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.020802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.021025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.021070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.025328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.025535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.025583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.029854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.030081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.030123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.034372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.034604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.034637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.038941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.039154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.039190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.637 [2024-07-15 11:38:22.043420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.637 [2024-07-15 11:38:22.043636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.637 [2024-07-15 11:38:22.043669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.047842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.048044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.048079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.052297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.052502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.052537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.056802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.057010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.057042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.061320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.061527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.061575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.065887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.066105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.066144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.070323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.070539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.070584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.074916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.075122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.075154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.079538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.079789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.079823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.084140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.084369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 
[2024-07-15 11:38:22.084395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.088707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.088948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.088979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.093301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.093542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.093588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.097931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.098160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.098202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.102449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.102677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.102712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.638 [2024-07-15 11:38:22.106928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.638 [2024-07-15 11:38:22.107140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.638 [2024-07-15 11:38:22.107184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.111384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.111610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.111645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.115935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.116138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.116164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.120618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.120861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.120903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.125242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.125459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.125501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.129765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.129985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.130027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.134280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.134487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.134526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.138951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.139158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.139196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.143441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.143672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.143708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.147954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.148186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.148211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.152525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.152783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.152818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.898 [2024-07-15 11:38:22.157419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.898 [2024-07-15 11:38:22.157650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.898 [2024-07-15 11:38:22.157686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.162156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.162379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.162414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.166945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.167169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.167196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.171676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.171892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.171929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.176363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.176589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.176624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.181141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.181376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.181427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.185917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.186157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.186213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.190785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.191001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.191042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.195461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.195701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.195739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.200386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.200706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.200741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.205236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.205478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.205517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.209969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.210185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.210222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.214421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 
[2024-07-15 11:38:22.214647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.214683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.218891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.219106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.219166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.223435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.223661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.223716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.228004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.228223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.228279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.232469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.232726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.232784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.237018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.237239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.237302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.241500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.241744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.241797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.246039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.246252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.246306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.250570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.250789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.250843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.255084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.255305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.255366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.259569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.259801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.259871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.263995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.264234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.264266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.268671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.268917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.268950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.273151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.273381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.273434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.277651] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.277865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.277919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.282162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.282373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.282416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.286671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.286884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.286926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.899 [2024-07-15 11:38:22.291182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.899 [2024-07-15 11:38:22.291390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.899 [2024-07-15 11:38:22.291434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.295636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.295852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.295893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.300106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.300313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.300356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.304610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.304815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.304855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
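Editor's note (sketch, not part of the captured log): each of these transient-transport-error completions is counted per bdev by the NVMe driver, and at the end of the pass the harness reads that counter back and asserts it is non-zero (the get_transient_errcount / bperf_rpc / jq trace further down, which ends in the "(( 399 > 0 ))" check). Reduced to a standalone sketch of that step, using the same RPC socket, bdev name, and jq filter that appear in this log:

    # sketch: read the transient transport error counter the way the digest test does
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # this pass succeeds when at least one injected error was counted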
00:19:44.900 [2024-07-15 11:38:22.309128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.309343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.309383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.313586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.313794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.313839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.318091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.318297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.318339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.322516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.322733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.322772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.326983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.327188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.327228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.331542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.331763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.331805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.336049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.336279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.336319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.340871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.341082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.341123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.345350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.345569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.345603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.349875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.350087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.350126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.354364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.354582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.354621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.358866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.359070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.359110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.363310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.363517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.363572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.367825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.368043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.368084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:44.900 [2024-07-15 11:38:22.372312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:44.900 [2024-07-15 11:38:22.372523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.900 [2024-07-15 11:38:22.372577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.158 [2024-07-15 11:38:22.376833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:45.158 [2024-07-15 11:38:22.377035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.158 [2024-07-15 11:38:22.377074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.158 [2024-07-15 11:38:22.381325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:45.158 [2024-07-15 11:38:22.381527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.158 [2024-07-15 11:38:22.381578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.158 [2024-07-15 11:38:22.385962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:45.158 [2024-07-15 11:38:22.386167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.158 [2024-07-15 11:38:22.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:45.158 [2024-07-15 11:38:22.390443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1407bc0) with pdu=0x2000190fef90 00:19:45.158 [2024-07-15 11:38:22.390674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.158 [2024-07-15 11:38:22.390712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:45.158 00:19:45.158 Latency(us) 00:19:45.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.158 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:45.158 nvme0n1 : 2.00 6184.93 773.12 0.00 0.00 2581.10 1921.40 9711.24 00:19:45.158 =================================================================================================================== 00:19:45.158 Total : 6184.93 773.12 0.00 0.00 2581.10 1921.40 9711.24 00:19:45.158 0 00:19:45.158 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:45.158 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:45.158 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:45.158 | .driver_specific 00:19:45.158 | 
.nvme_error 00:19:45.158 | .status_code 00:19:45.158 | .command_transient_transport_error' 00:19:45.158 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93822 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93822 ']' 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93822 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93822 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93822' 00:19:45.416 killing process with pid 93822 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93822 00:19:45.416 Received shutdown signal, test time was about 2.000000 seconds 00:19:45.416 00:19:45.416 Latency(us) 00:19:45.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.416 =================================================================================================================== 00:19:45.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.416 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93822 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93535 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93535 ']' 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93535 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93535 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.674 killing process with pid 93535 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93535' 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93535 00:19:45.674 11:38:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93535 00:19:45.674 00:19:45.674 real 0m17.299s 00:19:45.674 user 0m33.093s 00:19:45.674 sys 0m4.412s 00:19:45.674 11:38:23 
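The get_transient_errcount helper traced above simply reads the bdev iostat back over the bperf RPC socket and extracts the transient-transport-error counter with jq. A condensed sketch of that query, using only the rpc.py invocation and jq filter visible in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test then asserts the returned count (399 in this run) is greater than zero, i.e. that the injected data-digest errors surfaced on the host as COMMAND TRANSIENT TRANSPORT ERROR completions.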
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.674 11:38:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:45.674 ************************************ 00:19:45.674 END TEST nvmf_digest_error 00:19:45.674 ************************************ 00:19:45.674 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:45.674 11:38:23 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:45.674 11:38:23 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:45.674 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:45.674 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.936 rmmod nvme_tcp 00:19:45.936 rmmod nvme_fabrics 00:19:45.936 rmmod nvme_keyring 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93535 ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93535 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93535 ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93535 00:19:45.936 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93535) - No such process 00:19:45.936 Process with pid 93535 is not found 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93535 is not found' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:45.936 00:19:45.936 real 0m36.104s 00:19:45.936 user 1m8.469s 00:19:45.936 sys 0m9.085s 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:45.936 11:38:23 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:45.936 ************************************ 00:19:45.936 END TEST nvmf_digest 00:19:45.936 ************************************ 00:19:45.936 11:38:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:45.936 11:38:23 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 
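For the tcp transport, the nvmftestfini cleanup traced above reduces to a handful of commands; this is only a summary of what the trace shows, not the full function:

  modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are its output
  modprobe -v -r nvme-fabrics
  ip -4 addr flush nvmf_init_if  # runs after _remove_spdk_ns tears down the target namespace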
1 -eq 1 ]] 00:19:45.936 11:38:23 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:45.936 11:38:23 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:45.936 11:38:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:45.936 11:38:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.936 11:38:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.936 ************************************ 00:19:45.936 START TEST nvmf_mdns_discovery 00:19:45.936 ************************************ 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:45.936 * Looking for test storage... 00:19:45.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:45.936 11:38:23 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.936 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:45.937 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:46.202 Cannot find device "nvmf_tgt_br" 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.202 Cannot find device "nvmf_tgt_br2" 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:46.202 Cannot find device "nvmf_tgt_br" 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:46.202 Cannot find device "nvmf_tgt_br2" 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:46.202 11:38:23 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.202 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:46.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:46.461 00:19:46.461 --- 10.0.0.2 ping statistics --- 00:19:46.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.461 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:46.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:46.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:19:46.461 00:19:46.461 --- 10.0.0.3 ping statistics --- 00:19:46.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.461 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
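nvmf_veth_init, whose commands make up the block above, builds a small veth/bridge topology before the mDNS test starts: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are joined on a bridge. A condensed sketch of the sequence as it appears in the trace (link-up steps omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br   # likewise nvmf_tgt_br and nvmf_tgt_br2
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2 and 10.0.0.3 from the root namespace, and to 10.0.0.1 from inside the namespace, simply confirm connectivity across this topology before the target is started.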
00:19:46.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:46.461 00:19:46.461 --- 10.0.0.1 ping statistics --- 00:19:46.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.461 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94117 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94117 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94117 ']' 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.461 11:38:23 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.461 [2024-07-15 11:38:23.812315] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:19:46.461 [2024-07-15 11:38:23.812437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.719 [2024-07-15 11:38:23.946537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.719 [2024-07-15 11:38:24.006593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:46.719 [2024-07-15 11:38:24.006653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.719 [2024-07-15 11:38:24.006665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.719 [2024-07-15 11:38:24.006673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.719 [2024-07-15 11:38:24.006681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.719 [2024-07-15 11:38:24.006709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 [2024-07-15 11:38:24.946029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 [2024-07-15 11:38:24.958182] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 null0 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 null1 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 null2 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 null3 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.654 11:38:24 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94167 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94167 /tmp/host.sock 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94167 ']' 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:47.654 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.654 11:38:25 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 [2024-07-15 11:38:25.092818] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
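Before the mDNS pieces come in, the target is configured with a short RPC sequence (rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock here); summarising only the calls visible above:

  rpc.py nvmf_set_config --discovery-filter=address
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512     # null1..null3 created the same way
  rpc.py bdev_wait_for_examine

A second nvmf_tgt instance (core mask 0x1, RPC socket /tmp/host.sock) is then launched to act as the discovering host.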
00:19:47.654 [2024-07-15 11:38:25.092991] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94167 ] 00:19:47.912 [2024-07-15 11:38:25.238727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.912 [2024-07-15 11:38:25.298440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94194 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:48.844 11:38:26 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:48.844 Process 979 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:48.844 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:48.844 Successfully dropped root privileges. 00:19:48.844 avahi-daemon 0.8 starting up. 00:19:48.844 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:48.844 Successfully called chroot(). 00:19:48.844 Successfully dropped remaining capabilities. 00:19:48.844 No service file found in /etc/avahi/services. 00:19:49.777 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:49.777 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:49.777 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:49.777 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:49.777 Network interface enumeration completed. 00:19:49.777 Registering new address record for fe80::388e:13ff:feae:828e on nvmf_tgt_if2.*. 00:19:49.777 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:19:49.777 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:19:49.777 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:19:49.777 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 4283373866. 
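The avahi-daemon started for the test does not use the system configuration; host/mdns_discovery.sh feeds it a one-off file via process substitution (the echo -e above). Expanded, that configuration is simply:

  [server]
  allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
  use-ipv4=yes
  use-ipv6=no

which pins mDNS to the two target-namespace interfaces and IPv4 only, matching the "Joining mDNS multicast group ... 10.0.0.3 / 10.0.0.2" lines in its startup output.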
00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:49.777 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
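On the host side, discovery is kicked off over /tmp/host.sock with two RPCs (mdns_discovery.sh lines 61-62 above); roughly:

  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

that is, browse for the _nvme-disc._tcp service and attach whatever is advertised, naming the resulting controllers with the "mdns" prefix (mdns0_nvme0, mdns1_nvme0 later in the trace) and using nqn.2021-12.io.spdk:test as the host NQN.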
00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # xargs 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 [2024-07-15 11:38:27.468076] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 [2024-07-15 11:38:27.503864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.036 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:50.293 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.294 [2024-07-15 11:38:27.543887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.294 11:38:27 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.294 [2024-07-15 11:38:27.555881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.294 11:38:27 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:19:51.227 [2024-07-15 11:38:28.368071] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:51.794 [2024-07-15 11:38:28.968101] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:51.794 [2024-07-15 11:38:28.968151] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:51.794 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:51.794 cookie is 0 00:19:51.794 is_local: 1 00:19:51.794 our_own: 0 00:19:51.794 wide_area: 0 00:19:51.794 multicast: 1 00:19:51.794 cached: 1 00:19:51.794 [2024-07-15 11:38:29.068098] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:51.794 [2024-07-15 11:38:29.068152] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:51.794 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:51.794 cookie is 0 00:19:51.794 is_local: 1 00:19:51.794 our_own: 0 00:19:51.794 wide_area: 0 00:19:51.794 multicast: 1 00:19:51.794 cached: 1 00:19:51.794 [2024-07-15 11:38:29.068168] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:51.794 [2024-07-15 11:38:29.168099] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:51.794 [2024-07-15 11:38:29.168150] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:51.794 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:51.794 cookie is 0 00:19:51.794 is_local: 1 00:19:51.794 our_own: 0 00:19:51.794 wide_area: 0 00:19:51.794 multicast: 1 00:19:51.794 cached: 1 00:19:51.794 [2024-07-15 11:38:29.268102] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:51.794 [2024-07-15 11:38:29.268154] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:51.794 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:51.794 cookie is 0 00:19:51.794 is_local: 1 00:19:51.794 our_own: 0 00:19:51.794 wide_area: 0 00:19:51.794 multicast: 1 00:19:51.794 cached: 1 00:19:51.794 [2024-07-15 11:38:29.268171] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:52.730 [2024-07-15 11:38:29.981782] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:52.730 [2024-07-15 11:38:29.981833] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:52.730 [2024-07-15 11:38:29.981854] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:52.730 [2024-07-15 11:38:30.067941] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:19:52.730 [2024-07-15 11:38:30.124999] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:52.730 [2024-07-15 11:38:30.125048] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:52.730 [2024-07-15 11:38:30.181660] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:52.730 [2024-07-15 11:38:30.181710] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:52.730 [2024-07-15 11:38:30.181733] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:52.989 [2024-07-15 11:38:30.267817] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:19:52.989 [2024-07-15 11:38:30.324135] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:52.989 [2024-07-15 11:38:30.324188] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 
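Meanwhile the target side has published two subsystems for the browser to find; condensing the rpc_cmd calls traced above (mdns_discovery.sh lines 95-124):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20       # same ns/host steps, listener on 10.0.0.3:4420
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  rpc.py nvmf_publish_mdns_prr

nvmf_publish_mdns_prr advertises the discovery listeners through avahi, which is what produces the spdk0/spdk1 '_nvme-disc._tcp' resolve records above and, in turn, the two discovery controllers the host attaches (mdns0_nvme0 reaching cnode20 via 10.0.0.3, mdns1_nvme0 reaching cnode0 via 10.0.0.2).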
11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.515 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:55.774 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.774 11:38:32 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:56.711 11:38:33 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 [2024-07-15 11:38:34.110877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:56.711 [2024-07-15 11:38:34.111404] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:56.711 [2024-07-15 11:38:34.111436] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:56.711 [2024-07-15 11:38:34.111473] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:56.711 [2024-07-15 11:38:34.111487] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:56.711 [2024-07-15 11:38:34.118816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:56.711 [2024-07-15 11:38:34.119394] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:56.711 [2024-07-15 11:38:34.119452] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.711 11:38:34 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:19:56.969 [2024-07-15 11:38:34.250520] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:19:56.969 [2024-07-15 11:38:34.252501] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:19:56.969 [2024-07-15 11:38:34.315780] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:56.969 [2024-07-15 11:38:34.315830] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:56.969 [2024-07-15 11:38:34.315838] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:56.969 [2024-07-15 11:38:34.315871] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:56.969 [2024-07-15 
11:38:34.315921] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:56.969 [2024-07-15 11:38:34.315932] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:56.969 [2024-07-15 11:38:34.315938] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:56.969 [2024-07-15 11:38:34.315954] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:56.969 [2024-07-15 11:38:34.362641] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:56.969 [2024-07-15 11:38:34.362684] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:56.969 [2024-07-15 11:38:34.362733] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:56.969 [2024-07-15 11:38:34.362742] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:19:57.899 
11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:57.899 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.159 [2024-07-15 11:38:35.444364] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:58.159 [2024-07-15 11:38:35.444407] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:58.159 [2024-07-15 11:38:35.444446] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:58.159 [2024-07-15 11:38:35.444461] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:58.159 [2024-07-15 11:38:35.446489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.446529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.446734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.446964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.447173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.447395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.447688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.447881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.448078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:58.159 [2024-07-15 11:38:35.456434] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:58.159 [2024-07-15 11:38:35.456507] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:58.159 [2024-07-15 11:38:35.456573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.159 11:38:35 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:19:58.159 [2024-07-15 11:38:35.464029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.464073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.464089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.464099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.464109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.464119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.464129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:58.159 [2024-07-15 11:38:35.464138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:58.159 [2024-07-15 11:38:35.464148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.159 [2024-07-15 11:38:35.466587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.159 [2024-07-15 11:38:35.466728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.159 [2024-07-15 11:38:35.466752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.159 [2024-07-15 11:38:35.466765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.159 [2024-07-15 11:38:35.466785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.159 [2024-07-15 11:38:35.466802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.159 [2024-07-15 11:38:35.466812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.159 [2024-07-15 11:38:35.466824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.159 [2024-07-15 11:38:35.466841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
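The path switch exercised above pairs a new 4421 listener with removal of the old 4420 listener on each target subsystem (mdns_discovery.sh steps @147/@148 and @160/@161). A minimal equivalent with SPDK's scripts/rpc.py, issued the same way rpc_cmd does here, with the addresses, ports and NQNs taken from the trace:

  # bring the new path up first, then retire the old one
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

The connect() failed, errno = 111 (ECONNREFUSED) retries that follow appear to be the expected fallout of dropping the 4420 listeners: the bdev layer keeps trying to reconnect to the removed port until the discovery pollers report the 4420 paths as not found and only the 4421 paths remain.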
00:19:58.159 [2024-07-15 11:38:35.473974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.159 [2024-07-15 11:38:35.476670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.159 [2024-07-15 11:38:35.476859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.159 [2024-07-15 11:38:35.476886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.159 [2024-07-15 11:38:35.476900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.159 [2024-07-15 11:38:35.476922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.159 [2024-07-15 11:38:35.476939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.159 [2024-07-15 11:38:35.476948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.159 [2024-07-15 11:38:35.476961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.159 [2024-07-15 11:38:35.476977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.159 [2024-07-15 11:38:35.483998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.159 [2024-07-15 11:38:35.484195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.159 [2024-07-15 11:38:35.484220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.160 [2024-07-15 11:38:35.484234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.484256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.484272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.484282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.484293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.160 [2024-07-15 11:38:35.484309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.160 [2024-07-15 11:38:35.486758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.160 [2024-07-15 11:38:35.486859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.486882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.160 [2024-07-15 11:38:35.486894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.486912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.486927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.486936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.486947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.160 [2024-07-15 11:38:35.486963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.160 [2024-07-15 11:38:35.494103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.160 [2024-07-15 11:38:35.494226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.494249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.160 [2024-07-15 11:38:35.494261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.494280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.494296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.494305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.494315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.160 [2024-07-15 11:38:35.494331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.160 [2024-07-15 11:38:35.496824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.160 [2024-07-15 11:38:35.496929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.496952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.160 [2024-07-15 11:38:35.496964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.496982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.496997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.497006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.497016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.160 [2024-07-15 11:38:35.497032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.160 [2024-07-15 11:38:35.504189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.160 [2024-07-15 11:38:35.504307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.504331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.160 [2024-07-15 11:38:35.504344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.504362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.504377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.504387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.504398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.160 [2024-07-15 11:38:35.504414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.160 [2024-07-15 11:38:35.506891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.160 [2024-07-15 11:38:35.506994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.507018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.160 [2024-07-15 11:38:35.507029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.507047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.507063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.507072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.507082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.160 [2024-07-15 11:38:35.507097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.160 [2024-07-15 11:38:35.514274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.160 [2024-07-15 11:38:35.514721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.514885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.160 [2024-07-15 11:38:35.515081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.515302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.515340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.515359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.515373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.160 [2024-07-15 11:38:35.515391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.160 [2024-07-15 11:38:35.516965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.160 [2024-07-15 11:38:35.517075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.517098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.160 [2024-07-15 11:38:35.517110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.517128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.517143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.517153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.517163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.160 [2024-07-15 11:38:35.517179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.160 [2024-07-15 11:38:35.524627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.160 [2024-07-15 11:38:35.524777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.524802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.160 [2024-07-15 11:38:35.524813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.524844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.524861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.524871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.524881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.160 [2024-07-15 11:38:35.524897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.160 [2024-07-15 11:38:35.527035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.160 [2024-07-15 11:38:35.527142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.527165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.160 [2024-07-15 11:38:35.527176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.527194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.160 [2024-07-15 11:38:35.527209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.160 [2024-07-15 11:38:35.527218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.160 [2024-07-15 11:38:35.527228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.160 [2024-07-15 11:38:35.527244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.160 [2024-07-15 11:38:35.534718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.160 [2024-07-15 11:38:35.534889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.160 [2024-07-15 11:38:35.534913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.160 [2024-07-15 11:38:35.534926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.160 [2024-07-15 11:38:35.534947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.534964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.534974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.534985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.161 [2024-07-15 11:38:35.535001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.161 [2024-07-15 11:38:35.537101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.161 [2024-07-15 11:38:35.537199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.537221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.161 [2024-07-15 11:38:35.537232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.537250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.537265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.537274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.537284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.161 [2024-07-15 11:38:35.537300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.161 [2024-07-15 11:38:35.544820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.161 [2024-07-15 11:38:35.544968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.544994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.161 [2024-07-15 11:38:35.545006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.545026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.545042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.545052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.545063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.161 [2024-07-15 11:38:35.545079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.161 [2024-07-15 11:38:35.547163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.161 [2024-07-15 11:38:35.547263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.547286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.161 [2024-07-15 11:38:35.547298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.547316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.547332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.547341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.547351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.161 [2024-07-15 11:38:35.547367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.161 [2024-07-15 11:38:35.554913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.161 [2024-07-15 11:38:35.555038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.555062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.161 [2024-07-15 11:38:35.555075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.555095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.555111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.555121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.555131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.161 [2024-07-15 11:38:35.555148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.161 [2024-07-15 11:38:35.557226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.161 [2024-07-15 11:38:35.557321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.557344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.161 [2024-07-15 11:38:35.557355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.557374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.557388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.557398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.557407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.161 [2024-07-15 11:38:35.557423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.161 [2024-07-15 11:38:35.564995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.161 [2024-07-15 11:38:35.565157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.565181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.161 [2024-07-15 11:38:35.565194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.565215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.565231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.565241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.565252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.161 [2024-07-15 11:38:35.565269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.161 [2024-07-15 11:38:35.567288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.161 [2024-07-15 11:38:35.567384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.567406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.161 [2024-07-15 11:38:35.567418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.567436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.567450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.567460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.567469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.161 [2024-07-15 11:38:35.567485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.161 [2024-07-15 11:38:35.575089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.161 [2024-07-15 11:38:35.575235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.575260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.161 [2024-07-15 11:38:35.575272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.575291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.575307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.575316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.575327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.161 [2024-07-15 11:38:35.575343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.161 [2024-07-15 11:38:35.577349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:58.161 [2024-07-15 11:38:35.577447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.577469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bdd350 with addr=10.0.0.2, port=4420 00:19:58.161 [2024-07-15 11:38:35.577480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdd350 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.577498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdd350 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.577513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.577523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.577532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:58.161 [2024-07-15 11:38:35.577566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:58.161 [2024-07-15 11:38:35.585178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:58.161 [2024-07-15 11:38:35.585333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.161 [2024-07-15 11:38:35.585356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b96230 with addr=10.0.0.3, port=4420 00:19:58.161 [2024-07-15 11:38:35.585369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96230 is same with the state(5) to be set 00:19:58.161 [2024-07-15 11:38:35.585390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96230 (9): Bad file descriptor 00:19:58.161 [2024-07-15 11:38:35.585405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:58.161 [2024-07-15 11:38:35.585415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:58.161 [2024-07-15 11:38:35.585426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:58.161 [2024-07-15 11:38:35.585443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:58.161 [2024-07-15 11:38:35.585726] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:19:58.161 [2024-07-15 11:38:35.585749] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:58.161 [2024-07-15 11:38:35.585787] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:58.161 [2024-07-15 11:38:35.586752] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:58.162 [2024-07-15 11:38:35.586783] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:58.162 [2024-07-15 11:38:35.586805] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:58.418 [2024-07-15 11:38:35.671833] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:58.418 [2024-07-15 11:38:35.672839] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
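The per-path check repeated through this test (get_subsystem_paths at @131/@132 before the switch, @153/@154 while both listeners are live, @166/@167 after the old one is gone) reduces to one controller query per discovery instance. A minimal sketch with scripts/rpc.py against the /tmp/host.sock host socket used above:

  # list the service ports of every path attached to a given controller
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

In this run the expected output moves from 4420 alone, to 4420 4421 while both listeners are up, to 4421 alone once the 4420 listeners have been removed.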
00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.351 11:38:36 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:19:59.351 [2024-07-15 11:38:36.768221] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.723 [2024-07-15 11:38:37.982141] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:20:00.723 2024/07/15 11:38:37 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:00.723 request: 00:20:00.723 { 00:20:00.723 "method": "bdev_nvme_start_mdns_discovery", 00:20:00.723 "params": { 00:20:00.723 "name": "mdns", 00:20:00.723 "svcname": "_nvme-disc._http", 00:20:00.723 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:00.723 } 00:20:00.723 } 00:20:00.723 Got JSON-RPC error response 00:20:00.723 GoRPCClient: error on JSON-RPC call 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.723 11:38:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:20:01.288 [2024-07-15 11:38:38.570857] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:01.288 [2024-07-15 11:38:38.670858] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:01.545 [2024-07-15 11:38:38.770880] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:01.545 [2024-07-15 11:38:38.770932] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:01.545 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:01.545 cookie is 0 00:20:01.545 is_local: 1 00:20:01.545 our_own: 0 00:20:01.545 wide_area: 0 00:20:01.545 multicast: 1 00:20:01.545 cached: 1 00:20:01.545 [2024-07-15 11:38:38.870867] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:01.545 [2024-07-15 11:38:38.870917] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:01.545 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:01.545 cookie is 0 00:20:01.545 is_local: 1 00:20:01.545 our_own: 0 00:20:01.545 wide_area: 0 00:20:01.545 multicast: 1 00:20:01.545 cached: 1 00:20:01.545 [2024-07-15 11:38:38.870933] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:01.545 [2024-07-15 11:38:38.970872] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:01.545 [2024-07-15 11:38:38.970924] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:01.545 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:01.545 cookie is 0 00:20:01.545 is_local: 1 00:20:01.545 our_own: 0 00:20:01.545 wide_area: 0 00:20:01.545 multicast: 1 00:20:01.545 cached: 1 00:20:01.802 [2024-07-15 11:38:39.070868] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:01.802 [2024-07-15 11:38:39.070916] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:01.802 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:01.802 cookie is 0 00:20:01.802 is_local: 1 00:20:01.802 our_own: 0 00:20:01.802 wide_area: 0 00:20:01.802 multicast: 1 00:20:01.802 cached: 1 00:20:01.802 [2024-07-15 11:38:39.070932] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:02.368 [2024-07-15 11:38:39.784406] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:02.368 [2024-07-15 11:38:39.784437] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:02.368 [2024-07-15 11:38:39.784470] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:02.627 [2024-07-15 11:38:39.872571] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:20:02.627 [2024-07-15 11:38:39.939180] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:02.627 [2024-07-15 11:38:39.939230] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:20:02.627 [2024-07-15 11:38:39.983817] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:02.627 [2024-07-15 11:38:39.983864] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:02.627 [2024-07-15 11:38:39.983886] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:02.627 [2024-07-15 11:38:40.069978] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:20:02.885 [2024-07-15 11:38:40.130265] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:02.885 [2024-07-15 11:38:40.130316] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:06.165 11:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:20:06.165 11:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:06.165 11:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 
11:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:06.165 11:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:06.165 11:38:42 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 11:38:42 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.166 [2024-07-15 11:38:43.193627] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:20:06.166 2024/07/15 11:38:43 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:20:06.166 request: 00:20:06.166 { 00:20:06.166 "method": "bdev_nvme_start_mdns_discovery", 00:20:06.166 "params": { 00:20:06.166 "name": "cdc", 00:20:06.166 "svcname": "_nvme-disc._tcp", 00:20:06.166 "hostnqn": "nqn.2021-12.io.spdk:test" 00:20:06.166 } 00:20:06.166 } 00:20:06.166 Got JSON-RPC error response 00:20:06.166 GoRPCClient: error on JSON-RPC call 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94167 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94167 00:20:06.166 [2024-07-15 11:38:43.386951] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94194 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:06.166 Got SIGTERM, quitting. 00:20:06.166 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:06.166 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:06.166 avahi-daemon 0.8 exiting. 
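[Editor's note] Both negative-path checks in this run follow the same shape: a second bdev_nvme_start_mdns_discovery call that reuses an existing discovery name ("mdns") or an already-watched service ("_nvme-disc._tcp") is expected to fail with JSON-RPC error -17 (File exists), and the NOT wrapper from autotest_common.sh converts that expected failure into a pass. A condensed sketch using the same RPC names and arguments recorded above (illustrative, not the verbatim harness code):

    # first registration succeeds
    rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # reusing the name "mdns" (or the service _nvme-disc._tcp under a new name) must fail with -17 "File exists"
    NOT rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
    NOT rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # teardown: stopping discovery also stops the avahi poller for _nvme-disc._tcp
    rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns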
00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.166 rmmod nvme_tcp 00:20:06.166 rmmod nvme_fabrics 00:20:06.166 rmmod nvme_keyring 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94117 ']' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94117 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94117 ']' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94117 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94117 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.166 killing process with pid 94117 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94117' 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94117 00:20:06.166 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94117 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.424 00:20:06.424 real 0m20.502s 00:20:06.424 user 0m40.266s 00:20:06.424 sys 0m1.957s 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.424 11:38:43 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 ************************************ 00:20:06.424 END TEST nvmf_mdns_discovery 00:20:06.424 ************************************ 00:20:06.424 11:38:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
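[Editor's note] The nvmf_host_multipath test that follows repeats one basic cycle: set the ANA state of the 4420 and 4421 listeners, let bdevperf drive I/O for several seconds while scripts/bpf/nvmf_path.bt counts completions per path, then assert that the active port matches the listener left in the requested state. The core of that cycle, reduced to the RPC and trace-parsing commands visible later in this log (paths abbreviated; an illustrative sketch, not the verbatim multipath.sh):

    # steer I/O: 4420 non_optimized, 4421 optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
    # ask the target which listener is currently optimized
    rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
    # parse the bpftrace output ("@path[10.0.0.2, 4421]: <count>") along these lines to recover the port
    awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p

A matching port (here 4421) confirms that I/O actually moved to the path the test marked optimized; the inaccessible/inaccessible case expects no @path lines at all.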
00:20:06.424 11:38:43 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:20:06.424 11:38:43 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:06.424 11:38:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:06.424 11:38:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.424 11:38:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 ************************************ 00:20:06.424 START TEST nvmf_host_multipath 00:20:06.424 ************************************ 00:20:06.424 11:38:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:06.681 * Looking for test storage... 00:20:06.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.681 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:06.682 Cannot 
find device "nvmf_tgt_br" 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:06.682 11:38:43 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.682 Cannot find device "nvmf_tgt_br2" 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:06.682 Cannot find device "nvmf_tgt_br" 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:06.682 Cannot find device "nvmf_tgt_br2" 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.682 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.940 11:38:44 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:06.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:20:06.940 00:20:06.940 --- 10.0.0.2 ping statistics --- 00:20:06.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.940 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:06.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:20:06.940 00:20:06.940 --- 10.0.0.3 ping statistics --- 00:20:06.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.940 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:20:06.940 00:20:06.940 --- 10.0.0.1 ping statistics --- 00:20:06.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.940 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94751 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94751 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94751 ']' 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.940 11:38:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:06.940 [2024-07-15 11:38:44.397188] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:20:06.940 [2024-07-15 11:38:44.397297] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.198 [2024-07-15 11:38:44.536815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:07.198 [2024-07-15 11:38:44.609656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:07.198 [2024-07-15 11:38:44.609730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.198 [2024-07-15 11:38:44.609743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.198 [2024-07-15 11:38:44.609753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.198 [2024-07-15 11:38:44.609762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.198 [2024-07-15 11:38:44.613594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.198 [2024-07-15 11:38:44.613629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94751 00:20:08.133 11:38:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:08.391 [2024-07-15 11:38:45.781512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.391 11:38:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:08.649 Malloc0 00:20:08.907 11:38:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:08.907 11:38:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:09.165 11:38:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.425 [2024-07-15 11:38:46.846036] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.425 11:38:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:09.684 [2024-07-15 11:38:47.086092] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94855 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 94855 /var/tmp/bdevperf.sock 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94855 ']' 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.685 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:10.253 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.253 11:38:47 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:10.253 11:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:10.253 11:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:10.820 Nvme0n1 00:20:10.820 11:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:11.079 Nvme0n1 00:20:11.079 11:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:11.079 11:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:12.456 11:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:12.456 11:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:12.456 11:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:12.714 11:38:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:12.714 11:38:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94934 00:20:12.714 11:38:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:12.714 11:38:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:19.270 11:38:56 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:19.270 Attaching 4 probes... 00:20:19.270 @path[10.0.0.2, 4421]: 15417 00:20:19.270 @path[10.0.0.2, 4421]: 17018 00:20:19.270 @path[10.0.0.2, 4421]: 17339 00:20:19.270 @path[10.0.0.2, 4421]: 15434 00:20:19.270 @path[10.0.0.2, 4421]: 16486 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94934 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:19.270 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:19.527 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:19.527 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95065 00:20:19.527 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:19.527 11:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:26.086 11:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:26.086 11:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.086 Attaching 4 probes... 
00:20:26.086 @path[10.0.0.2, 4420]: 16720 00:20:26.086 @path[10.0.0.2, 4420]: 17131 00:20:26.086 @path[10.0.0.2, 4420]: 16729 00:20:26.086 @path[10.0.0.2, 4420]: 16540 00:20:26.086 @path[10.0.0.2, 4420]: 16704 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95065 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:26.086 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:26.343 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:26.601 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:26.601 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95196 00:20:26.601 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:26.601 11:39:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:33.190 11:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:33.190 11:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:33.190 Attaching 4 probes... 
00:20:33.190 @path[10.0.0.2, 4421]: 14999 00:20:33.190 @path[10.0.0.2, 4421]: 16768 00:20:33.190 @path[10.0.0.2, 4421]: 16712 00:20:33.190 @path[10.0.0.2, 4421]: 16906 00:20:33.190 @path[10.0.0.2, 4421]: 16361 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95196 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:33.190 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:33.448 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:33.448 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95321 00:20:33.448 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:33.448 11:39:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:40.002 11:39:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:40.002 11:39:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:40.002 Attaching 4 probes... 
00:20:40.002 00:20:40.002 00:20:40.002 00:20:40.002 00:20:40.002 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95321 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:40.002 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:40.565 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:40.565 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95457 00:20:40.565 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:40.565 11:39:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:47.121 11:39:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:47.121 11:39:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:47.121 Attaching 4 probes... 
00:20:47.121 @path[10.0.0.2, 4421]: 14268 00:20:47.121 @path[10.0.0.2, 4421]: 16369 00:20:47.121 @path[10.0.0.2, 4421]: 16508 00:20:47.121 @path[10.0.0.2, 4421]: 16338 00:20:47.121 @path[10.0.0.2, 4421]: 16580 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95457 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:47.121 [2024-07-15 11:39:24.346490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.346676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 
00:20:47.121 [2024-07-15 11:39:24.346685 .. 11:39:24.347150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set [identical message repeated with consecutive timestamps while the 4421 listener was being removed; duplicate entries elided] 00:20:47.121 [2024-07-15 11:39:24.347164] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 [2024-07-15 11:39:24.347289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139e310 is same with the state(5) to be set 00:20:47.121 11:39:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:48.054 11:39:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:48.054 11:39:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95593 00:20:48.054 11:39:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:48.054 11:39:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:54.657 Attaching 4 probes... 
00:20:54.657 @path[10.0.0.2, 4420]: 15571 00:20:54.657 @path[10.0.0.2, 4420]: 16427 00:20:54.657 @path[10.0.0.2, 4420]: 16374 00:20:54.657 @path[10.0.0.2, 4420]: 15684 00:20:54.657 @path[10.0.0.2, 4420]: 16464 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95593 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:54.657 [2024-07-15 11:39:31.962788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:54.657 11:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:54.914 11:39:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:01.468 11:39:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:01.468 11:39:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95789 00:21:01.468 11:39:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94751 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:01.468 11:39:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:08.039 Attaching 4 probes... 
00:21:08.039 @path[10.0.0.2, 4421]: 15928 00:21:08.039 @path[10.0.0.2, 4421]: 16110 00:21:08.039 @path[10.0.0.2, 4421]: 16099 00:21:08.039 @path[10.0.0.2, 4421]: 15296 00:21:08.039 @path[10.0.0.2, 4421]: 16701 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95789 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94855 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94855 ']' 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94855 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94855 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:08.039 killing process with pid 94855 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94855' 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94855 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94855 00:21:08.039 Connection closed with partial response: 00:21:08.039 00:21:08.039 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94855 00:21:08.039 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:08.039 [2024-07-15 11:38:47.156230] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:08.039 [2024-07-15 11:38:47.156345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94855 ] 00:21:08.039 [2024-07-15 11:38:47.288269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.039 [2024-07-15 11:38:47.350437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.039 Running I/O for 90 seconds... 
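The @path[10.0.0.2, <port>] lines above are bpftrace map counters (I/O observed per target path), and every confirm_io_on_port block in this trace follows the same pattern: start the bpftrace probe, flip or read the ANA state over RPC, then compare the port that reports the requested ANA state against the port that actually carried I/O. Below is a minimal bash sketch of that flow, reconstructed only from the xtrace lines shown (multipath.sh @58-@73); the variable names, the output redirection, and the exact layout of multipath.sh are assumptions for illustration, not the script verbatim.

  spdk_rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  tracefile=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  spdk_pid=94751        # SPDK app pid handed to bpftrace.sh in this run

  set_ANA_state() {     # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
      "$spdk_rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$spdk_rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  confirm_io_on_port() {  # $1 = expected ana_state, $2 = expected port
      # Count I/O per path with scripts/bpf/nvmf_path.bt for ~6 seconds.
      # (In the real helper the trace file is handled by bpftrace.sh itself;
      #  the redirection here is only illustrative.)
      ./scripts/bpftrace.sh "$spdk_pid" ./scripts/bpf/nvmf_path.bt > "$tracefile" &
      dtrace_pid=$!
      sleep 6
      # Listener that currently reports the requested ANA state.
      active_port=$("$spdk_rpc" nvmf_subsystem_get_listeners "$nqn" \
          | jq -r '.[] | select(.ana_states[0].ana_state=="'"$1"'") | .address.trsvcid')
      # First port that actually received I/O according to the @path[...] counters.
      port=$(cut -d ']' -f1 "$tracefile" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
      [[ "$active_port" == "$2" ]] && [[ "$port" == "$2" ]]
      kill "$dtrace_pid"
      rm -f "$tracefile"
  }

The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions dumped below are the expected path-related NVMe status on whichever listener was just flipped to inaccessible; those errors are what force the host to move I/O onto the remaining path, which the counters above then confirm.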
00:21:08.039 [2024-07-15 11:38:56.886066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.039 [2024-07-15 11:38:56.886152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.039 [2024-07-15 11:38:56.886239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886938] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.886974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.886996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887311] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:08.039 [2024-07-15 11:38:56.887333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.039 [2024-07-15 11:38:56.887348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.887973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.887995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 
nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.888639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.888653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.889349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.889378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.889407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.889424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.889446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.889460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.889482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.889497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:08.040 [2024-07-15 11:38:56.889519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.040 [2024-07-15 11:38:56.889533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:21:08.040 [2024-07-15 11:38:56.889569] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 lba:8728-8816 and WRITE sqid:1 lba:8840-9208 (nsid:1, len:8 each) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
00:21:08.041 [2024-07-15 11:39:03.562690] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 lba:62192-62568 and READ sqid:1 lba:61552-62184 (nsid:1, len:8 each) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
00:21:08.043 [2024-07-15 11:39:10.734219] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 lba:75184 and WRITE sqid:1 lba:75256-75320 (nsid:1, len:8 each) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0026 p:0 
m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.736969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.736993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.737008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.737047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.043 [2024-07-15 11:39:10.737880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:08.043 [2024-07-15 11:39:10.737934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.043 [2024-07-15 11:39:10.737954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.737983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.737998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:120 nsid:1 lba:75576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.738974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.738990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:08.044 
[2024-07-15 11:39:10.739188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.739965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.739980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740219] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:10.740387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.044 [2024-07-15 11:39:10.740403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:24.347522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.044 [2024-07-15 11:39:24.347583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:24.347612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.044 [2024-07-15 11:39:24.347627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:24.347643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.044 [2024-07-15 11:39:24.347657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:24.347672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.044 [2024-07-15 11:39:24.347685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:24.347700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.044 [2024-07-15 11:39:24.347735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.044 [2024-07-15 11:39:24.347753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.044 [2024-07-15 11:39:24.347766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.347984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.347997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348053] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.045 [2024-07-15 11:39:24.348435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 
11:39:24.348963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.348976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.348991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:117 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.045 [2024-07-15 11:39:24.349788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.045 [2024-07-15 11:39:24.349801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.349816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.349829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.349844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.349856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.349872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.349885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.349899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.349925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.349943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.349957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.349979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.349993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.046 [2024-07-15 11:39:24.350373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 
[2024-07-15 11:39:24.350457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.350978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.350991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:08.046 [2024-07-15 11:39:24.351308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:08.046 [2024-07-15 11:39:24.351362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104496 len:8 PRP1 0x0 PRP2 0x0 00:21:08.046 [2024-07-15 
11:39:24.351375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:08.046 [2024-07-15 11:39:24.351403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:08.046 [2024-07-15 11:39:24.351413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104504 len:8 PRP1 0x0 PRP2 0x0 00:21:08.046 [2024-07-15 11:39:24.351428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbb4500 was disconnected and freed. reset controller. 00:21:08.046 [2024-07-15 11:39:24.351604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.046 [2024-07-15 11:39:24.351629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.046 [2024-07-15 11:39:24.351671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.046 [2024-07-15 11:39:24.351697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.046 [2024-07-15 11:39:24.351724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.046 [2024-07-15 11:39:24.351737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd804d0 is same with the state(5) to be set 00:21:08.046 [2024-07-15 11:39:24.353185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:08.046 [2024-07-15 11:39:24.353235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd804d0 (9): Bad file descriptor 00:21:08.046 [2024-07-15 11:39:24.353348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.046 [2024-07-15 11:39:24.353378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd804d0 with addr=10.0.0.2, port=4421 00:21:08.046 [2024-07-15 11:39:24.353394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd804d0 is same with the state(5) to be set 00:21:08.046 [2024-07-15 11:39:24.353422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd804d0 (9): Bad file descriptor 00:21:08.046 [2024-07-15 11:39:24.353445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:08.046 [2024-07-15 11:39:24.353459] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:08.046 [2024-07-15 11:39:24.353474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:08.046 [2024-07-15 11:39:24.353499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:08.046 [2024-07-15 11:39:24.353514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:08.046 [2024-07-15 11:39:34.453524] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:08.046 Received shutdown signal, test time was about 56.001317 seconds 00:21:08.046 00:21:08.046 Latency(us) 00:21:08.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.046 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:08.046 Verification LBA range: start 0x0 length 0x4000 00:21:08.046 Nvme0n1 : 56.00 6961.86 27.19 0.00 0.00 18352.24 670.25 7046430.72 00:21:08.046 =================================================================================================================== 00:21:08.046 Total : 6961.86 27.19 0.00 0.00 18352.24 670.25 7046430.72 00:21:08.046 11:39:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:08.046 11:39:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:08.046 11:39:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:08.046 11:39:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.047 rmmod nvme_tcp 00:21:08.047 rmmod nvme_fabrics 00:21:08.047 rmmod nvme_keyring 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94751 ']' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94751 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94751 ']' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94751 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94751 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:08.047 killing process with pid 94751 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94751' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94751 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94751 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:08.047 00:21:08.047 real 1m1.566s 00:21:08.047 user 2m54.980s 00:21:08.047 sys 0m13.691s 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.047 11:39:45 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:08.047 ************************************ 00:21:08.047 END TEST nvmf_host_multipath 00:21:08.047 ************************************ 00:21:08.047 11:39:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:08.047 11:39:45 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:08.047 11:39:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:08.047 11:39:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.047 11:39:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.047 ************************************ 00:21:08.047 START TEST nvmf_timeout 00:21:08.047 ************************************ 00:21:08.047 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:08.385 * Looking for test storage... 
00:21:08.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.385 11:39:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.386 
11:39:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.386 11:39:45 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:08.386 Cannot find device "nvmf_tgt_br" 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.386 Cannot find device "nvmf_tgt_br2" 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:08.386 Cannot find device "nvmf_tgt_br" 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:08.386 Cannot find device "nvmf_tgt_br2" 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.386 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:08.386 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:08.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:08.655 00:21:08.655 --- 10.0.0.2 ping statistics --- 00:21:08.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.655 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:08.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:08.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:21:08.655 00:21:08.655 --- 10.0.0.3 ping statistics --- 00:21:08.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.655 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:08.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:21:08.655 00:21:08.655 --- 10.0.0.1 ping statistics --- 00:21:08.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.655 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96103 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96103 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96103 ']' 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.655 11:39:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:08.655 [2024-07-15 11:39:45.993643] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:08.655 [2024-07-15 11:39:45.993765] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.914 [2024-07-15 11:39:46.132112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:08.914 [2024-07-15 11:39:46.201161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.914 [2024-07-15 11:39:46.201248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.914 [2024-07-15 11:39:46.201270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.914 [2024-07-15 11:39:46.201287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.914 [2024-07-15 11:39:46.201300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.914 [2024-07-15 11:39:46.203590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.914 [2024-07-15 11:39:46.203642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.843 11:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.100 [2024-07-15 11:39:47.331203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.100 11:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.357 Malloc0 00:21:10.357 11:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.613 11:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:10.869 11:39:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.126 [2024-07-15 11:39:48.468060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96200 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96200 /var/tmp/bdevperf.sock 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96200 ']' 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.126 11:39:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 [2024-07-15 11:39:48.534460] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:11.126 [2024-07-15 11:39:48.534567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96200 ] 00:21:11.382 [2024-07-15 11:39:48.665967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.382 [2024-07-15 11:39:48.741810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.312 11:39:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.312 11:39:49 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:12.312 11:39:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:12.569 11:39:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:12.827 NVMe0n1 00:21:12.827 11:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96246 00:21:12.827 11:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.827 11:39:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:12.827 Running I/O for 10 seconds... 
00:21:13.758 11:39:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:14.016 [2024-07-15 11:39:51.366676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30900 is same with the state(5) to be set
00:21:14.017 (identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state message for tqpair=0xa30900 repeated from 11:39:51.366732 through 11:39:51.367225; duplicate entries elided)
00:21:14.017 [2024-07-15 11:39:51.368840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.368880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.368904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.368915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.368927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.368936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.368949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.368958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.368969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.368979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.368990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.368999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 
11:39:51.369104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.017 [2024-07-15 11:39:51.369612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.017 [2024-07-15 11:39:51.369881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.017 [2024-07-15 11:39:51.369892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.369902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.369913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.369922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.369947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.369958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 
11:39:51.369969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.369979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.369990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.369999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81352 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.018 [2024-07-15 11:39:51.370801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 
[2024-07-15 11:39:51.370821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.370984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.370996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.018 [2024-07-15 11:39:51.371289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.018 [2024-07-15 11:39:51.371298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.019 [2024-07-15 11:39:51.371318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.019 [2024-07-15 11:39:51.371338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.019 [2024-07-15 11:39:51.371360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.019 [2024-07-15 11:39:51.371381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81600 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81608 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:14.019 [2024-07-15 11:39:51.371479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81616 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81624 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81632 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81640 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81648 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81656 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371697] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.019 [2024-07-15 11:39:51.371704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.019 [2024-07-15 11:39:51.371712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81664 len:8 PRP1 0x0 PRP2 0x0 00:21:14.019 [2024-07-15 11:39:51.371721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371767] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e438d0 was disconnected and freed. reset controller. 00:21:14.019 [2024-07-15 11:39:51.371864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.019 [2024-07-15 11:39:51.371882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.019 [2024-07-15 11:39:51.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.019 [2024-07-15 11:39:51.371922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.019 [2024-07-15 11:39:51.371941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.019 [2024-07-15 11:39:51.371951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6240 is same with the state(5) to be set 00:21:14.019 [2024-07-15 11:39:51.372177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:14.019 [2024-07-15 11:39:51.372200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd6240 (9): Bad file descriptor 00:21:14.019 [2024-07-15 11:39:51.372305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:14.019 [2024-07-15 11:39:51.372328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd6240 with addr=10.0.0.2, port=4420 00:21:14.019 [2024-07-15 11:39:51.372339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6240 is same with the state(5) to be set 00:21:14.019 [2024-07-15 11:39:51.372358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd6240 (9): Bad file descriptor 00:21:14.019 [2024-07-15 11:39:51.372375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:14.019 [2024-07-15 11:39:51.372384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:14.019 [2024-07-15 11:39:51.372394] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:14.019 [2024-07-15 11:39:51.372414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:14.019 [2024-07-15 11:39:51.372425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:14.019 11:39:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:15.913 [2024-07-15 11:39:53.372798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:15.913 [2024-07-15 11:39:53.372879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd6240 with addr=10.0.0.2, port=4420 00:21:15.913 [2024-07-15 11:39:53.372898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6240 is same with the state(5) to be set 00:21:15.913 [2024-07-15 11:39:53.372927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd6240 (9): Bad file descriptor 00:21:15.913 [2024-07-15 11:39:53.372948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:15.913 [2024-07-15 11:39:53.372959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:15.913 [2024-07-15 11:39:53.372969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:15.913 [2024-07-15 11:39:53.372998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:15.913 [2024-07-15 11:39:53.373010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:16.170 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:16.170 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:16.170 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:16.445 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:16.445 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:16.445 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:16.445 11:39:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:17.037 11:39:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:17.038 11:39:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:17.970 [2024-07-15 11:39:55.373187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:17.970 [2024-07-15 11:39:55.373272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd6240 with addr=10.0.0.2, port=4420 00:21:17.970 [2024-07-15 11:39:55.373290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6240 is same with the state(5) to be set 00:21:17.970 [2024-07-15 11:39:55.373544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd6240 (9): Bad file descriptor 00:21:17.970 [2024-07-15 11:39:55.373581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:17.970 [2024-07-15 11:39:55.373592] 
nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:17.970 [2024-07-15 11:39:55.373603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:17.970 [2024-07-15 11:39:55.373652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:17.970 [2024-07-15 11:39:55.373667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:19.924 [2024-07-15 11:39:57.373715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:19.924 [2024-07-15 11:39:57.373791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:19.924 [2024-07-15 11:39:57.373804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:19.924 [2024-07-15 11:39:57.373815] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:19.924 [2024-07-15 11:39:57.374075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:21.299 00:21:21.299 Latency(us) 00:21:21.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.299 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:21.299 Verification LBA range: start 0x0 length 0x4000 00:21:21.299 NVMe0n1 : 8.15 1237.19 4.83 15.71 0.00 102008.94 2532.07 7015926.69 00:21:21.299 =================================================================================================================== 00:21:21.299 Total : 1237.19 4.83 15.71 0.00 102008.94 2532.07 7015926.69 00:21:21.299 0 00:21:21.865 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:21.865 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:21.865 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:22.123 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:22.123 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:22.123 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:22.123 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96246 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96200 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96200 ']' 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96200 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96200 00:21:22.386 killing process with pid 96200 00:21:22.386 Received shutdown signal, test time was about 9.521376 seconds 00:21:22.386 00:21:22.386 Latency(us) 00:21:22.386 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:22.386 =================================================================================================================== 00:21:22.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96200' 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96200 00:21:22.386 11:39:59 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96200 00:21:22.662 11:39:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.920 [2024-07-15 11:40:00.174398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96400 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96400 /var/tmp/bdevperf.sock 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96400 ']' 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.920 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:22.920 [2024-07-15 11:40:00.268836] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
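
At this point the earlier bdevperf has been killed, the TCP listener has been re-added, and a fresh bdevperf is coming up in wait mode on a private RPC socket; the attach call that follows in the trace carries the reconnect policy this test exercises. A minimal sketch of that driving pattern, with paths and flags taken from the trace above and below (waitforlisten is the harness helper visible in the trace; anything not shown in the log is an assumption):

  # start bdevperf idle: -z makes it wait for a perform_tests RPC before running I/O
  # (remaining flags copied from the trace: core mask 0x4, queue depth 128, 4 KiB verify workload for 10 s)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # harness helper: block until the RPC socket accepts requests

  # attach the target as bdev NVMe0n1; the three timeout flags are the reconnect policy under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 2 --ctrlr-loss-timeout-sec 5

  # kick off the workload over the same socket
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
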
00:21:22.920 [2024-07-15 11:40:00.268928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96400 ] 00:21:23.178 [2024-07-15 11:40:00.410775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.178 [2024-07-15 11:40:00.499124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.178 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.178 11:40:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:23.178 11:40:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:23.436 11:40:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:21:23.693 NVMe0n1 00:21:23.693 11:40:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96434 00:21:23.693 11:40:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.693 11:40:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:21:23.951 Running I/O for 10 seconds... 00:21:24.885 11:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.146 [2024-07-15 11:40:02.424795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424874] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [2024-07-15 11:40:02.424931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc2eb50 is same with the state(5) to be set 00:21:25.146 [... the identical nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eb50 is same with the state(5) to be set message repeats with timestamps 11:40:02.424939 through 11:40:02.425648 ...] 00:21:25.147
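
With the listener gone, the tcp.c messages above are the target-side qpair being torn down, and the nvme_qpair.c entries below are the initiator completing every I/O still outstanding on that queue as ABORTED - SQ DELETION; per the attach flags earlier in the trace, bdev_nvme then retries the connection once per second, starts failing queued I/O after 2 seconds, and deletes the controller after 5 seconds without a successful reconnect. The get_controller/get_bdev checks the test uses to observe this reduce to two RPC queries over the bdevperf socket; a minimal sketch using the socket path from the trace:

  # controller name as seen by bdevperf; expected to go empty once ctrlr-loss-timeout expires
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'

  # bdev name (NVMe0n1 while attached, empty after the controller is deleted)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_get_bdevs | jq -r '.[].name'
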
[2024-07-15 11:40:02.427418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-15 11:40:02.427459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same nvme_io_qpair_print_command/spdk_nvme_print_completion pair repeats for every I/O still outstanding on the queue: READ commands for lba 79408 through 79904 and WRITE commands for lba 79912 through 80288, each completed ABORTED - SQ DELETION (00/08); nvme_qpair_abort_queued_reqs then reports aborting queued i/o and queued WRITE commands for lba 80296 through 80384 are completed manually with the same status, timestamps 11:40:02.427481 through 11:40:02.430352 ...] 00:21:25.150 [2024-07-15 11:40:02.430361] nvme_qpair.c:
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.150 [2024-07-15 11:40:02.430368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.150 [2024-07-15 11:40:02.430376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80392 len:8 PRP1 0x0 PRP2 0x0 00:21:25.150 [2024-07-15 11:40:02.430385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.150 [2024-07-15 11:40:02.430395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.150 [2024-07-15 11:40:02.430403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.150 [2024-07-15 11:40:02.430411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80400 len:8 PRP1 0x0 PRP2 0x0 00:21:25.150 [2024-07-15 11:40:02.430420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.150 [2024-07-15 11:40:02.442215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.150 [2024-07-15 11:40:02.442251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.150 [2024-07-15 11:40:02.442264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80408 len:8 PRP1 0x0 PRP2 0x0 00:21:25.150 [2024-07-15 11:40:02.442278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.150 [2024-07-15 11:40:02.442290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:25.150 [2024-07-15 11:40:02.442298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:25.150 [2024-07-15 11:40:02.442307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80416 len:8 PRP1 0x0 PRP2 0x0 00:21:25.150 [2024-07-15 11:40:02.442316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.150 [2024-07-15 11:40:02.442377] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14ef8d0 was disconnected and freed. reset controller. 
00:21:25.150 [2024-07-15 11:40:02.442498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:25.150 [2024-07-15 11:40:02.442516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:25.150 [2024-07-15 11:40:02.442530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:25.150 [2024-07-15 11:40:02.442540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:25.150 [2024-07-15 11:40:02.442567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:25.150 [2024-07-15 11:40:02.442577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:25.150 [2024-07-15 11:40:02.442588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:25.151 [2024-07-15 11:40:02.442597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:25.151 [2024-07-15 11:40:02.442606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set
00:21:25.151 [2024-07-15 11:40:02.442862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.151 [2024-07-15 11:40:02.442889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor
00:21:25.151 [2024-07-15 11:40:02.442991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.151 [2024-07-15 11:40:02.443013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482240 with addr=10.0.0.2, port=4420
00:21:25.151 [2024-07-15 11:40:02.443024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set
00:21:25.151 [2024-07-15 11:40:02.443044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor
00:21:25.151 [2024-07-15 11:40:02.443060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.151 [2024-07-15 11:40:02.443070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.151 [2024-07-15 11:40:02.443081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.151 [2024-07-15 11:40:02.443101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.151 [2024-07-15 11:40:02.443112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.151 11:40:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:21:26.082 [2024-07-15 11:40:03.443270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.082 [2024-07-15 11:40:03.443354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482240 with addr=10.0.0.2, port=4420
00:21:26.082 [2024-07-15 11:40:03.443371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set
00:21:26.082 [2024-07-15 11:40:03.443401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor
00:21:26.082 [2024-07-15 11:40:03.443421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.082 [2024-07-15 11:40:03.443432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.082 [2024-07-15 11:40:03.443444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.082 [2024-07-15 11:40:03.443473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.082 [2024-07-15 11:40:03.443486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.082 11:40:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:26.338 [2024-07-15 11:40:03.709492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:26.338 11:40:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96434
00:21:27.269 [2024-07-15 11:40:04.460347] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:33.835
00:21:33.835                                    Latency(us)
00:21:33.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:33.835 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:33.835 Verification LBA range: start 0x0 length 0x4000
00:21:33.835 NVMe0n1 : 10.01 5813.70 22.71 0.00 0.00 21977.84 2293.76 3035150.89
00:21:33.835 ===================================================================================================================
00:21:33.835 Total : 5813.70 22.71 0.00 0.00 21977.84 2293.76 3035150.89
00:21:33.835 0
00:21:33.835 11:40:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96550
00:21:33.835 11:40:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:33.835 11:40:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:34.093 Running I/O for 10 seconds...
00:21:35.028 11:40:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.289 [2024-07-15 11:40:12.567160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567579] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567708] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567750] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 
00:21:35.289 [2024-07-15 11:40:12.567758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567789] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is 
same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.289 [2024-07-15 11:40:12.567959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.567967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.567975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.567983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.567991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.567998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.568006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.568014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.568022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa87660 is same with the state(5) to be set 00:21:35.290 [2024-07-15 11:40:12.568704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.568984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.568994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:35.290 [2024-07-15 11:40:12.569070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569281] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.290 [2024-07-15 11:40:12.569556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.290 [2024-07-15 11:40:12.569571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:38 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80680 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.569987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.569997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:35.291 [2024-07-15 11:40:12.570168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.291 [2024-07-15 11:40:12.570189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.291 [2024-07-15 11:40:12.570210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.291 [2024-07-15 11:40:12.570231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.291 [2024-07-15 11:40:12.570252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.291 [2024-07-15 11:40:12.570273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.291 [2024-07-15 11:40:12.570294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.291 [2024-07-15 11:40:12.570479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.291 [2024-07-15 11:40:12.570488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.292 [2024-07-15 11:40:12.570781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.570982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.570991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 
11:40:12.571256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.292 [2024-07-15 11:40:12.571410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.292 [2024-07-15 11:40:12.571422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.293 [2024-07-15 11:40:12.571431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.293 [2024-07-15 11:40:12.571444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.293 [2024-07-15 11:40:12.571454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.293 [2024-07-15 11:40:12.571465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:35.293 [2024-07-15 11:40:12.571475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.293 [2024-07-15 11:40:12.571506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:35.293 [2024-07-15 11:40:12.571516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:35.293 [2024-07-15 11:40:12.571525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81256 len:8 PRP1 0x0 PRP2 0x0 00:21:35.293 [2024-07-15 11:40:12.571534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:35.293 [2024-07-15 11:40:12.571597] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1500c30 was disconnected and freed. reset controller. 00:21:35.293 [2024-07-15 11:40:12.571841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.293 [2024-07-15 11:40:12.571927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor 00:21:35.293 [2024-07-15 11:40:12.572034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:35.293 [2024-07-15 11:40:12.572063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482240 with addr=10.0.0.2, port=4420 00:21:35.293 [2024-07-15 11:40:12.572074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set 00:21:35.293 [2024-07-15 11:40:12.572094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor 00:21:35.293 [2024-07-15 11:40:12.572110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:35.293 [2024-07-15 11:40:12.572120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:35.293 [2024-07-15 11:40:12.572131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:35.293 [2024-07-15 11:40:12.572151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
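The long run of ABORTED - SQ DELETION notices above is one notice per command still queued or outstanding on qpair 1 when it is torn down: each aborted READ/WRITE is printed together with a manufactured completion, and the (00/08) pair after the status name is the NVMe status code type / status code. As a reading aid only, a small hypothetical helper (not part of the test scripts) could decode that pair, assuming the NVMe generic command status table in which 0x08 is Command Aborted due to SQ Deletion:

  # decode_nvme_status is illustrative only; SCT 0x0 is the generic command status type.
  decode_nvme_status() {
      local sct=$1 sc=$2
      case "$sct/$sc" in
          00/08) echo "generic status 0x08: Command Aborted due to SQ Deletion" ;;
          *)     echo "SCT=0x$sct SC=0x$sc: see the NVMe base specification status tables" ;;
      esac
  }
  decode_nvme_status 00 08   # -> generic status 0x08: Command Aborted due to SQ Deletion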
00:21:35.293 [2024-07-15 11:40:12.572162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:35.293 11:40:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:36.225 [2024-07-15 11:40:13.572308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.225 [2024-07-15 11:40:13.572386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482240 with addr=10.0.0.2, port=4420 00:21:36.225 [2024-07-15 11:40:13.572405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set 00:21:36.225 [2024-07-15 11:40:13.572433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor 00:21:36.225 [2024-07-15 11:40:13.572453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.225 [2024-07-15 11:40:13.572464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.225 [2024-07-15 11:40:13.572475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.225 [2024-07-15 11:40:13.572502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.225 [2024-07-15 11:40:13.572514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.156 [2024-07-15 11:40:14.572677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.156 [2024-07-15 11:40:14.572766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482240 with addr=10.0.0.2, port=4420 00:21:37.156 [2024-07-15 11:40:14.572796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set 00:21:37.156 [2024-07-15 11:40:14.572837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor 00:21:37.156 [2024-07-15 11:40:14.572866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.156 [2024-07-15 11:40:14.572881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.156 [2024-07-15 11:40:14.572897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.156 [2024-07-15 11:40:14.572937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.156 [2024-07-15 11:40:14.572956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:38.526 [2024-07-15 11:40:15.576613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:38.526 [2024-07-15 11:40:15.576694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1482240 with addr=10.0.0.2, port=4420 00:21:38.526 [2024-07-15 11:40:15.576712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482240 is same with the state(5) to be set 00:21:38.526 [2024-07-15 11:40:15.576975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1482240 (9): Bad file descriptor 00:21:38.526 [2024-07-15 11:40:15.577228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:38.526 [2024-07-15 11:40:15.577251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:38.526 [2024-07-15 11:40:15.577263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:38.526 [2024-07-15 11:40:15.581232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:38.526 [2024-07-15 11:40:15.581267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:38.526 11:40:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.526 [2024-07-15 11:40:15.881162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.526 11:40:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96550 00:21:39.475 [2024-07-15 11:40:16.623340] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
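This is the fault-injection pattern the timeout test exercises: with the target listener gone, every reconnect attempt above fails at connect() with errno 111 and the controller is left in a failed state, and once nvmf_subsystem_add_listener is issued the next reset is the one logged as successful. A minimal sketch of that target-side toggle, using only the RPCs, subsystem NQN and address that appear in this trace (an illustration, not the host/timeout.sh source):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # take the listener away so host-side reconnects keep failing (connect() -> errno 111 above)
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3          # the host retries roughly once per second in the log above
  # restore the listener; the host's next reset attempt then completes successfully
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420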
00:21:44.734 00:21:44.734 Latency(us) 00:21:44.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.734 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.734 Verification LBA range: start 0x0 length 0x4000 00:21:44.734 NVMe0n1 : 10.01 4996.64 19.52 1876.08 0.00 18581.40 904.84 3019898.88 00:21:44.734 =================================================================================================================== 00:21:44.734 Total : 4996.64 19.52 1876.08 0.00 18581.40 0.00 3019898.88 00:21:44.734 0 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96400 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96400 ']' 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96400 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96400 00:21:44.734 killing process with pid 96400 00:21:44.734 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.734 00:21:44.734 Latency(us) 00:21:44.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.734 =================================================================================================================== 00:21:44.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96400' 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96400 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96400 00:21:44.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.734 11:40:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96667 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96667 /var/tmp/bdevperf.sock 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96667 ']' 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.735 11:40:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:44.735 [2024-07-15 11:40:21.706594] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:21:44.735 [2024-07-15 11:40:21.706724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96667 ] 00:21:44.735 [2024-07-15 11:40:21.861691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.735 [2024-07-15 11:40:21.941408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.302 11:40:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.302 11:40:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:45.302 11:40:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96695 00:21:45.302 11:40:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96667 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:45.302 11:40:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:45.561 11:40:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:46.126 NVMe0n1 00:21:46.126 11:40:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96754 00:21:46.126 11:40:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:46.126 11:40:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:46.126 Running I/O for 10 seconds... 
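At this point the first bdevperf (pid 96400) has been killed and a second one started in wait-for-RPC mode, so that an explicit reconnect policy (--ctrlr-loss-timeout-sec 5, --reconnect-delay-sec 2) can be applied before I/O begins. Condensed from the xtrace above, with the same binaries, socket and flags (a reading aid only, not a substitute for host/timeout.sh; the sock and rpc_py variables are added here just for brevity):

  sock=/var/tmp/bdevperf.sock
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"
  # 128-deep 4 KiB random reads for 10 s; -z makes bdevperf wait for the perform_tests RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
  $rpc_py bdev_nvme_set_options -r -1 -e 9
  $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &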
00:21:47.058 11:40:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.626 [2024-07-15 11:40:24.844967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.626 [2024-07-15 11:40:24.845216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845368] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 
00:21:47.627 [2024-07-15 11:40:24.845573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is 
same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.845833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8ae00 is same with the state(5) to be set 00:21:47.627 [2024-07-15 11:40:24.846094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27592 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.627 [2024-07-15 11:40:24.846323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.627 [2024-07-15 11:40:24.846334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:47.628 [2024-07-15 11:40:24.846467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 
11:40:24.846696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.846980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.846989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.628 [2024-07-15 11:40:24.847226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.628 [2024-07-15 11:40:24.847236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:47.629 [2024-07-15 11:40:24.847533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 
11:40:24.847752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.847979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.847988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.629 [2024-07-15 11:40:24.848127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.629 [2024-07-15 11:40:24.848136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:47.630 [2024-07-15 11:40:24.848600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 11:40:24.848789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.630 [2024-07-15 
11:40:24.848809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:47.630 [2024-07-15 11:40:24.848849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:47.630 [2024-07-15 11:40:24.848858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17864 len:8 PRP1 0x0 PRP2 0x0 00:21:47.630 [2024-07-15 11:40:24.848867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.630 [2024-07-15 11:40:24.848916] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21438d0 was disconnected and freed. reset controller. 00:21:47.630 [2024-07-15 11:40:24.849209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.630 [2024-07-15 11:40:24.849305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d6240 (9): Bad file descriptor 00:21:47.630 [2024-07-15 11:40:24.849425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.630 [2024-07-15 11:40:24.849449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d6240 with addr=10.0.0.2, port=4420 00:21:47.631 [2024-07-15 11:40:24.849460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6240 is same with the state(5) to be set 00:21:47.631 [2024-07-15 11:40:24.849480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d6240 (9): Bad file descriptor 00:21:47.631 [2024-07-15 11:40:24.849497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.631 [2024-07-15 11:40:24.849506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.631 [2024-07-15 11:40:24.849519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.631 [2024-07-15 11:40:24.849565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
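The connect() failures above (errno = 111, which is ECONNREFUSED on Linux) show that nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so every reconnect attempt to nqn.2016-06.io.spdk:cnode1 is refused and the controller keeps cycling through reset attempts. How often those retries happen, and for how long, is governed by the reconnect options supplied when the NVMe bdev controller is attached. A minimal illustrative sketch of such an attach call follows; the RPC socket path and the timeout values are placeholders rather than the exact ones used by this run, and option spellings can differ slightly between SPDK releases:

  # hypothetical attach with an explicit reconnect policy (values are examples only)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 8 \
      --fast-io-fail-timeout-sec 4
  # --reconnect-delay-sec spaces the retries (about 2 s apart in the attempts logged below);
  # --ctrlr-loss-timeout-sec bounds how long retrying continues before the controller is
  # left in the failed state, as eventually happens in this run.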
00:21:47.631 [2024-07-15 11:40:24.849585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.631 11:40:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96754 00:21:49.531 [2024-07-15 11:40:26.849826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.531 [2024-07-15 11:40:26.849906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d6240 with addr=10.0.0.2, port=4420 00:21:49.531 [2024-07-15 11:40:26.849924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6240 is same with the state(5) to be set 00:21:49.531 [2024-07-15 11:40:26.849969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d6240 (9): Bad file descriptor 00:21:49.531 [2024-07-15 11:40:26.850016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.531 [2024-07-15 11:40:26.850030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.531 [2024-07-15 11:40:26.850041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.531 [2024-07-15 11:40:26.850069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.531 [2024-07-15 11:40:26.850081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:51.430 [2024-07-15 11:40:28.850328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.430 [2024-07-15 11:40:28.850408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d6240 with addr=10.0.0.2, port=4420 00:21:51.430 [2024-07-15 11:40:28.850426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d6240 is same with the state(5) to be set 00:21:51.430 [2024-07-15 11:40:28.850457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d6240 (9): Bad file descriptor 00:21:51.430 [2024-07-15 11:40:28.850477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:51.430 [2024-07-15 11:40:28.850488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:51.430 [2024-07-15 11:40:28.850499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:51.430 [2024-07-15 11:40:28.850527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.430 [2024-07-15 11:40:28.850539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:53.957 [2024-07-15 11:40:30.850704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
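Three delayed reconnect attempts are visible above, spaced about two seconds apart (11:40:26, 11:40:28 and 11:40:30), each refused with errno 111 before the controller is finally left in the failed state. The trace.txt dumped just below records those attempts as 'reconnect delay bdev controller NVMe0' events, and the pass condition is simply that enough of them occurred; host/timeout.sh@132 expresses this with the grep -c and (( ... )) pair shown below. A rough stand-alone sketch of that check, assuming the same trace layout:

  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # count how many times the bdev layer had to wait out a reconnect delay
  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  if (( delay_count <= 2 )); then
      echo "expected at least 3 delayed reconnects, got $delay_count" >&2
      exit 1
  fi

In this run the count is 3, so the (( 3 <= 2 )) test below evaluates false, which is the passing outcome for this check.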
00:21:53.957 [2024-07-15 11:40:30.850808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:53.957 [2024-07-15 11:40:30.850824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:53.957 [2024-07-15 11:40:30.850837] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:53.957 [2024-07-15 11:40:30.850872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:54.524 00:21:54.524 Latency(us) 00:21:54.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.524 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:54.524 NVMe0n1 : 8.28 2571.85 10.05 15.46 0.00 49423.08 2681.02 7015926.69 00:21:54.524 =================================================================================================================== 00:21:54.524 Total : 2571.85 10.05 15.46 0.00 49423.08 2681.02 7015926.69 00:21:54.524 0 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.524 Attaching 5 probes... 00:21:54.524 1512.407391: reset bdev controller NVMe0 00:21:54.524 1512.560439: reconnect bdev controller NVMe0 00:21:54.524 3512.871823: reconnect delay bdev controller NVMe0 00:21:54.524 3512.900048: reconnect bdev controller NVMe0 00:21:54.524 5513.364937: reconnect delay bdev controller NVMe0 00:21:54.524 5513.397107: reconnect bdev controller NVMe0 00:21:54.524 7513.855369: reconnect delay bdev controller NVMe0 00:21:54.524 7513.896248: reconnect bdev controller NVMe0 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96695 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96667 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96667 ']' 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96667 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96667 00:21:54.524 killing process with pid 96667 00:21:54.524 Received shutdown signal, test time was about 8.329073 seconds 00:21:54.524 00:21:54.524 Latency(us) 00:21:54.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.524 =================================================================================================================== 00:21:54.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96667' 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout 
-- common/autotest_common.sh@967 -- # kill 96667 00:21:54.524 11:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96667 00:21:54.783 11:40:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.041 rmmod nvme_tcp 00:21:55.041 rmmod nvme_fabrics 00:21:55.041 rmmod nvme_keyring 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96103 ']' 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96103 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96103 ']' 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96103 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96103 00:21:55.041 killing process with pid 96103 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96103' 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96103 00:21:55.041 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96103 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:55.300 00:21:55.300 real 0m47.183s 00:21:55.300 user 2m19.716s 
00:21:55.300 sys 0m4.811s 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.300 11:40:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:55.300 ************************************ 00:21:55.300 END TEST nvmf_timeout 00:21:55.300 ************************************ 00:21:55.300 11:40:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:55.300 11:40:32 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:21:55.300 11:40:32 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:21:55.300 11:40:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.300 11:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.300 11:40:32 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:21:55.300 00:21:55.300 real 15m40.153s 00:21:55.300 user 42m5.228s 00:21:55.300 sys 3m16.597s 00:21:55.300 11:40:32 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.300 11:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.300 ************************************ 00:21:55.300 END TEST nvmf_tcp 00:21:55.300 ************************************ 00:21:55.558 11:40:32 -- common/autotest_common.sh@1142 -- # return 0 00:21:55.558 11:40:32 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:21:55.558 11:40:32 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:55.558 11:40:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.558 11:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.558 11:40:32 -- common/autotest_common.sh@10 -- # set +x 00:21:55.558 ************************************ 00:21:55.558 START TEST spdkcli_nvmf_tcp 00:21:55.558 ************************************ 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:55.558 * Looking for test storage... 
00:21:55.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.558 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96967 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96967 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96967 ']' 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
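waitforlisten (spdkcli/common.sh@34) blocks the test until the nvmf_tgt just launched with -m 0x3 -p 0 (pid 96967) is actually serving RPCs on the UNIX socket named above, retrying up to max_retries=100 times. A rough sketch of the idea, using spdk_get_version as a cheap liveness RPC; the real helper in autotest_common.sh is more thorough (it also fails the test on timeout), so this is only an approximation:

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for (( i = 0; i < max_retries; i++ )); do
      # success means the target has created the socket and is answering RPCs
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" spdk_get_version >/dev/null 2>&1; then
          break
      fi
      kill -0 96967 2>/dev/null || exit 1   # stop waiting if the target process died
      sleep 0.5
  done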
00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.559 11:40:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:55.559 [2024-07-15 11:40:32.970691] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:21:55.559 [2024-07-15 11:40:32.970817] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96967 ] 00:21:55.816 [2024-07-15 11:40:33.106801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:55.816 [2024-07-15 11:40:33.169912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.816 [2024-07-15 11:40:33.169920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:56.746 11:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:56.746 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:56.746 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:56.746 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:56.746 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:56.746 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:56.746 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:56.746 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:56.746 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:56.746 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:56.746 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:56.746 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:56.746 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:56.746 ' 00:21:59.271 [2024-07-15 11:40:36.566865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.640 [2024-07-15 11:40:37.868012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:03.165 [2024-07-15 11:40:40.261768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:05.066 [2024-07-15 11:40:42.335331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:06.966 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:06.966 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:06.966 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:06.966 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:06.966 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:06.966 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:06.966 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:06.966 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:06.966 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:06.966 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:06.966 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:06.966 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:06.966 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:06.966 11:40:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:07.225 11:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:07.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:07.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:07.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:07.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:07.225 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:07.225 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:07.225 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:07.225 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:07.225 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:07.225 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:07.225 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:07.225 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:07.225 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:07.225 ' 00:22:12.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:12.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:12.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:12.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:12.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:12.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:12.507 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:12.507 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:12.507 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:12.507 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:12.507 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:12.507 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:12.507 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 
00:22:12.507 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96967 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96967 ']' 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96967 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96967 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:12.765 killing process with pid 96967 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96967' 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96967 00:22:12.765 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96967 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96967 ']' 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96967 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96967 ']' 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96967 00:22:13.024 Process with pid 96967 is not found 00:22:13.024 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96967) - No such process 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96967 is not found' 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:13.024 ************************************ 00:22:13.024 END TEST spdkcli_nvmf_tcp 00:22:13.024 ************************************ 00:22:13.024 00:22:13.024 real 0m17.481s 00:22:13.024 user 0m37.994s 00:22:13.024 sys 0m0.870s 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.024 11:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.024 11:40:50 -- common/autotest_common.sh@1142 -- # return 0 00:22:13.024 11:40:50 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:13.024 11:40:50 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:13.024 11:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.024 11:40:50 -- common/autotest_common.sh@10 -- # set +x 00:22:13.024 
************************************ 00:22:13.024 START TEST nvmf_identify_passthru 00:22:13.024 ************************************ 00:22:13.024 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:13.024 * Looking for test storage... 00:22:13.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:13.024 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.024 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.024 11:40:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.024 11:40:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.024 11:40:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.025 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:13.025 11:40:50 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:13.025 11:40:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.025 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.025 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:13.025 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:13.025 Cannot find device "nvmf_tgt_br" 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:13.025 Cannot find device "nvmf_tgt_br2" 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:13.025 Cannot find device "nvmf_tgt_br" 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:13.025 Cannot find device "nvmf_tgt_br2" 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:22:13.025 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:13.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:13.284 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:13.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:22:13.542 00:22:13.542 --- 10.0.0.2 ping statistics --- 00:22:13.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.542 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:13.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:13.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:13.542 00:22:13.542 --- 10.0.0.3 ping statistics --- 00:22:13.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.542 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:13.542 00:22:13.542 --- 10.0.0.1 ping statistics --- 00:22:13.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.542 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:13.542 11:40:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:13.542 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:13.542 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:13.543 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:13.543 11:40:50 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:22:13.543 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:22:13.543 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:22:13.543 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:13.543 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:13.543 11:40:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:13.801 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
00:22:13.801 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:13.801 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:13.801 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:13.801 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:22:13.801 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:13.801 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.801 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:14.059 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:14.059 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97462 00:22:14.059 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:14.059 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.059 11:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97462 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97462 ']' 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.059 11:40:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:14.059 [2024-07-15 11:40:51.335952] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:14.059 [2024-07-15 11:40:51.336049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.059 [2024-07-15 11:40:51.469182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.317 [2024-07-15 11:40:51.558385] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.317 [2024-07-15 11:40:51.558756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.317 [2024-07-15 11:40:51.558790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.317 [2024-07-15 11:40:51.558805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:14.317 [2024-07-15 11:40:51.558817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.317 [2024-07-15 11:40:51.558925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.317 [2024-07-15 11:40:51.558998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.317 [2024-07-15 11:40:51.559565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.317 [2024-07-15 11:40:51.559582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.883 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.883 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:22:14.883 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:14.883 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.883 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.141 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.141 [2024-07-15 11:40:52.410508] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.141 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.141 [2024-07-15 11:40:52.424148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.141 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.141 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.141 Nvme0n1 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.141 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.141 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.142 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.142 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.142 [2024-07-15 11:40:52.564586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.142 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.142 [ 00:22:15.142 { 00:22:15.142 "allow_any_host": true, 00:22:15.142 "hosts": [], 00:22:15.142 "listen_addresses": [], 00:22:15.142 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:15.142 "subtype": "Discovery" 00:22:15.142 }, 00:22:15.142 { 00:22:15.142 "allow_any_host": true, 00:22:15.142 "hosts": [], 00:22:15.142 "listen_addresses": [ 00:22:15.142 { 00:22:15.142 "adrfam": "IPv4", 00:22:15.142 "traddr": "10.0.0.2", 00:22:15.142 "trsvcid": "4420", 00:22:15.142 "trtype": "TCP" 00:22:15.142 } 00:22:15.142 ], 00:22:15.142 "max_cntlid": 65519, 00:22:15.142 "max_namespaces": 1, 00:22:15.142 "min_cntlid": 1, 00:22:15.142 "model_number": "SPDK bdev Controller", 00:22:15.142 "namespaces": [ 00:22:15.142 { 00:22:15.142 "bdev_name": "Nvme0n1", 00:22:15.142 "name": "Nvme0n1", 00:22:15.142 "nguid": "FB5130C0FFBF401CAF643B24A9A0BC50", 00:22:15.142 "nsid": 1, 00:22:15.142 "uuid": "fb5130c0-ffbf-401c-af64-3b24a9a0bc50" 00:22:15.142 } 00:22:15.142 ], 00:22:15.142 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.142 "serial_number": "SPDK00000000000001", 00:22:15.142 "subtype": "NVMe" 00:22:15.142 } 00:22:15.142 ] 00:22:15.142 11:40:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.142 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:15.142 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:15.142 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:15.400 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:15.400 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:15.400 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:15.400 11:40:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:15.658 11:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:15.658 11:40:53 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:15.658 11:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:15.658 11:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.658 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.658 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.658 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.658 11:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:15.658 11:40:53 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:15.658 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.658 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:15.658 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.658 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:15.658 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.658 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.658 rmmod nvme_tcp 00:22:15.658 rmmod nvme_fabrics 00:22:15.658 rmmod nvme_keyring 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97462 ']' 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97462 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97462 ']' 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97462 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97462 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97462' 00:22:15.936 killing process with pid 97462 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97462 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97462 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.936 11:40:53 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.936 11:40:53 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:15.936 00:22:15.936 real 0m3.047s 00:22:15.936 user 0m7.584s 00:22:15.936 sys 0m0.724s 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.936 11:40:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:15.936 ************************************ 00:22:15.936 END TEST nvmf_identify_passthru 00:22:15.936 ************************************ 00:22:16.213 11:40:53 -- common/autotest_common.sh@1142 -- # return 0 00:22:16.213 11:40:53 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:16.213 11:40:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:16.213 11:40:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.213 11:40:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.213 ************************************ 00:22:16.213 START TEST nvmf_dif 00:22:16.213 ************************************ 00:22:16.213 11:40:53 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:16.213 * Looking for test storage... 00:22:16.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:16.213 11:40:53 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.213 11:40:53 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.213 11:40:53 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.213 11:40:53 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.213 11:40:53 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.213 11:40:53 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.213 11:40:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.213 11:40:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:16.213 11:40:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.213 11:40:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:16.213 11:40:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:16.213 11:40:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:16.213 11:40:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:16.213 11:40:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.213 11:40:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:16.213 11:40:53 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:16.213 Cannot find device "nvmf_tgt_br" 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.213 Cannot find device "nvmf_tgt_br2" 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:16.213 Cannot find device "nvmf_tgt_br" 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:16.213 Cannot find device "nvmf_tgt_br2" 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.213 11:40:53 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:16.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:22:16.472 00:22:16.472 --- 10.0.0.2 ping statistics --- 00:22:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.472 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:16.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:16.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:22:16.472 00:22:16.472 --- 10.0.0.3 ping statistics --- 00:22:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.472 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:16.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:16.472 00:22:16.472 --- 10.0.0.1 ping statistics --- 00:22:16.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.472 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:16.472 11:40:53 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:16.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:16.730 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:16.730 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.988 11:40:54 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:16.988 11:40:54 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97806 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97806 00:22:16.988 11:40:54 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97806 ']' 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.988 11:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:16.988 [2024-07-15 11:40:54.290531] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:22:16.988 [2024-07-15 11:40:54.290652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.988 [2024-07-15 11:40:54.427307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.247 [2024-07-15 11:40:54.487162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:17.247 [2024-07-15 11:40:54.487215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.247 [2024-07-15 11:40:54.487227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.247 [2024-07-15 11:40:54.487235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.247 [2024-07-15 11:40:54.487243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.247 [2024-07-15 11:40:54.487268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.247 11:40:54 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.247 11:40:54 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:17.247 11:40:54 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.247 11:40:54 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 11:40:54 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.248 11:40:54 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:17.248 11:40:54 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 [2024-07-15 11:40:54.601895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.248 11:40:54 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.248 11:40:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 ************************************ 00:22:17.248 START TEST fio_dif_1_default 00:22:17.248 ************************************ 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 bdev_null0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.248 11:40:54 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:17.248 [2024-07-15 11:40:54.645977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:17.248 { 00:22:17.248 "params": { 00:22:17.248 "name": "Nvme$subsystem", 00:22:17.248 "trtype": "$TEST_TRANSPORT", 00:22:17.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.248 "adrfam": "ipv4", 00:22:17.248 "trsvcid": "$NVMF_PORT", 00:22:17.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.248 "hdgst": ${hdgst:-false}, 00:22:17.248 "ddgst": ${ddgst:-false} 00:22:17.248 }, 00:22:17.248 "method": "bdev_nvme_attach_controller" 00:22:17.248 } 00:22:17.248 EOF 00:22:17.248 )") 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local 
asan_lib= 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:17.248 "params": { 00:22:17.248 "name": "Nvme0", 00:22:17.248 "trtype": "tcp", 00:22:17.248 "traddr": "10.0.0.2", 00:22:17.248 "adrfam": "ipv4", 00:22:17.248 "trsvcid": "4420", 00:22:17.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:17.248 "hdgst": false, 00:22:17.248 "ddgst": false 00:22:17.248 }, 00:22:17.248 "method": "bdev_nvme_attach_controller" 00:22:17.248 }' 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:17.248 11:40:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:17.506 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:17.506 fio-3.35 00:22:17.506 Starting 1 thread 00:22:29.745 00:22:29.745 filename0: (groupid=0, jobs=1): err= 0: pid=97877: Mon Jul 15 11:41:05 2024 00:22:29.745 read: IOPS=1093, BW=4375KiB/s (4480kB/s)(42.8MiB/10014msec) 00:22:29.745 slat (usec): min=4, max=168, avg= 9.94, stdev= 6.67 00:22:29.745 clat (usec): min=455, max=42317, avg=3626.38, stdev=10659.58 00:22:29.745 lat (usec): min=463, max=42330, avg=3636.32, stdev=10660.76 00:22:29.745 clat percentiles (usec): 00:22:29.745 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 482], 00:22:29.745 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 515], 60.00th=[ 562], 
00:22:29.745 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 1565], 95.00th=[40633], 00:22:29.745 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:22:29.745 | 99.99th=[42206] 00:22:29.745 bw ( KiB/s): min= 480, max=11264, per=100.00%, avg=4379.20, stdev=2732.00, samples=20 00:22:29.745 iops : min= 120, max= 2816, avg=1094.80, stdev=683.00, samples=20 00:22:29.745 lat (usec) : 500=40.39%, 750=47.02%, 1000=1.45% 00:22:29.745 lat (msec) : 2=2.92%, 4=0.69%, 10=0.04%, 50=7.49% 00:22:29.745 cpu : usr=90.08%, sys=8.85%, ctx=226, majf=0, minf=9 00:22:29.745 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:29.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.745 issued rwts: total=10952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.745 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:29.745 00:22:29.745 Run status group 0 (all jobs): 00:22:29.745 READ: bw=4375KiB/s (4480kB/s), 4375KiB/s-4375KiB/s (4480kB/s-4480kB/s), io=42.8MiB (44.9MB), run=10014-10014msec 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.745 00:22:29.745 real 0m10.919s 00:22:29.745 user 0m9.627s 00:22:29.745 sys 0m1.112s 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:29.745 ************************************ 00:22:29.745 END TEST fio_dif_1_default 00:22:29.745 11:41:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:29.745 ************************************ 00:22:29.746 11:41:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:29.746 11:41:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:29.746 11:41:05 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:29.746 11:41:05 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 ************************************ 00:22:29.746 START TEST fio_dif_1_multi_subsystems 00:22:29.746 ************************************ 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- 
# fio_dif_1_multi_subsystems 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 bdev_null0 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 [2024-07-15 11:41:05.605579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 bdev_null1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.746 { 00:22:29.746 "params": { 00:22:29.746 "name": "Nvme$subsystem", 00:22:29.746 "trtype": "$TEST_TRANSPORT", 00:22:29.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.746 "adrfam": "ipv4", 00:22:29.746 "trsvcid": "$NVMF_PORT", 00:22:29.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.746 "hdgst": ${hdgst:-false}, 00:22:29.746 "ddgst": ${ddgst:-false} 00:22:29.746 }, 00:22:29.746 "method": "bdev_nvme_attach_controller" 00:22:29.746 } 00:22:29.746 EOF 00:22:29.746 )") 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:29.746 11:41:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:29.746 { 00:22:29.746 "params": { 00:22:29.746 "name": "Nvme$subsystem", 00:22:29.746 "trtype": "$TEST_TRANSPORT", 00:22:29.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.746 "adrfam": "ipv4", 00:22:29.746 "trsvcid": "$NVMF_PORT", 00:22:29.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.746 "hdgst": ${hdgst:-false}, 00:22:29.746 "ddgst": ${ddgst:-false} 00:22:29.746 }, 00:22:29.746 "method": "bdev_nvme_attach_controller" 00:22:29.746 } 00:22:29.746 EOF 00:22:29.746 )") 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
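The subsystem plumbing traced above reduces to four RPCs per subsystem: create a null bdev with 16 bytes of metadata and DIF type 1, create the subsystem, attach the bdev as a namespace, and add a TCP listener. A minimal manual equivalent, assuming a running nvmf_tgt and scripts/rpc.py from the SPDK tree talking to the default /var/tmp/spdk.sock (the harness issues exactly these calls through its rpc_cmd wrapper), looks roughly like:

  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1     # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1      # expose the bdev as a namespace of cnode1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420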
00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:29.746 "params": { 00:22:29.746 "name": "Nvme0", 00:22:29.746 "trtype": "tcp", 00:22:29.746 "traddr": "10.0.0.2", 00:22:29.746 "adrfam": "ipv4", 00:22:29.746 "trsvcid": "4420", 00:22:29.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:29.746 "hdgst": false, 00:22:29.746 "ddgst": false 00:22:29.746 }, 00:22:29.746 "method": "bdev_nvme_attach_controller" 00:22:29.746 },{ 00:22:29.746 "params": { 00:22:29.746 "name": "Nvme1", 00:22:29.746 "trtype": "tcp", 00:22:29.746 "traddr": "10.0.0.2", 00:22:29.746 "adrfam": "ipv4", 00:22:29.746 "trsvcid": "4420", 00:22:29.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.746 "hdgst": false, 00:22:29.746 "ddgst": false 00:22:29.746 }, 00:22:29.746 "method": "bdev_nvme_attach_controller" 00:22:29.746 }' 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.746 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:29.747 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:29.747 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:29.747 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:29.747 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:29.747 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:29.747 11:41:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:29.747 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:29.747 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:29.747 fio-3.35 00:22:29.747 Starting 2 threads 00:22:39.717 00:22:39.717 filename0: (groupid=0, jobs=1): err= 0: pid=98035: Mon Jul 15 11:41:16 2024 00:22:39.717 read: IOPS=458, BW=1834KiB/s (1878kB/s)(18.0MiB/10035msec) 00:22:39.717 slat (nsec): min=5050, max=71122, avg=11330.50, stdev=7839.81 00:22:39.717 clat (usec): min=472, max=42684, avg=8689.19, stdev=16036.03 00:22:39.717 lat (usec): min=480, max=42708, avg=8700.52, stdev=16037.40 00:22:39.717 clat percentiles (usec): 00:22:39.717 | 1.00th=[ 498], 5.00th=[ 545], 10.00th=[ 570], 20.00th=[ 611], 00:22:39.717 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 685], 60.00th=[ 1004], 00:22:39.717 | 70.00th=[ 1139], 80.00th=[ 1631], 90.00th=[41157], 95.00th=[41157], 00:22:39.717 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:22:39.717 | 99.99th=[42730] 00:22:39.717 bw ( KiB/s): min= 544, max= 7200, per=51.83%, avg=1838.40, stdev=1759.11, samples=20 00:22:39.717 iops : 
min= 136, max= 1800, avg=459.60, stdev=439.78, samples=20 00:22:39.717 lat (usec) : 500=1.02%, 750=55.39%, 1000=3.54% 00:22:39.717 lat (msec) : 2=20.30%, 4=0.09%, 10=0.09%, 50=19.57% 00:22:39.717 cpu : usr=94.20%, sys=4.90%, ctx=10, majf=0, minf=9 00:22:39.717 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.717 issued rwts: total=4600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.717 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:39.717 filename1: (groupid=0, jobs=1): err= 0: pid=98036: Mon Jul 15 11:41:16 2024 00:22:39.717 read: IOPS=429, BW=1717KiB/s (1758kB/s)(16.8MiB/10007msec) 00:22:39.717 slat (nsec): min=5019, max=73075, avg=11640.21, stdev=7842.68 00:22:39.717 clat (usec): min=464, max=42336, avg=9279.44, stdev=16433.67 00:22:39.717 lat (usec): min=472, max=42366, avg=9291.08, stdev=16434.82 00:22:39.717 clat percentiles (usec): 00:22:39.717 | 1.00th=[ 490], 5.00th=[ 523], 10.00th=[ 562], 20.00th=[ 611], 00:22:39.717 | 30.00th=[ 652], 40.00th=[ 693], 50.00th=[ 881], 60.00th=[ 1074], 00:22:39.717 | 70.00th=[ 1172], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:22:39.717 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:22:39.717 | 99.99th=[42206] 00:22:39.717 bw ( KiB/s): min= 448, max= 5536, per=48.39%, avg=1716.80, stdev=1471.75, samples=20 00:22:39.717 iops : min= 112, max= 1384, avg=429.20, stdev=367.94, samples=20 00:22:39.717 lat (usec) : 500=2.05%, 750=41.50%, 1000=13.18% 00:22:39.717 lat (msec) : 2=22.23%, 10=0.09%, 50=20.95% 00:22:39.717 cpu : usr=94.25%, sys=4.91%, ctx=19, majf=0, minf=0 00:22:39.717 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.717 issued rwts: total=4296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.717 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:39.717 00:22:39.717 Run status group 0 (all jobs): 00:22:39.717 READ: bw=3546KiB/s (3631kB/s), 1717KiB/s-1834KiB/s (1758kB/s-1878kB/s), io=34.8MiB (36.4MB), run=10007-10035msec 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 00:22:39.717 real 0m11.127s 00:22:39.717 user 0m19.662s 00:22:39.717 sys 0m1.230s 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 ************************************ 00:22:39.717 END TEST fio_dif_1_multi_subsystems 00:22:39.717 ************************************ 00:22:39.717 11:41:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:39.717 11:41:16 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:39.717 11:41:16 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:39.717 11:41:16 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 ************************************ 00:22:39.717 START TEST fio_dif_rand_params 00:22:39.717 ************************************ 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 bdev_null0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:39.717 [2024-07-15 11:41:16.783923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:39.717 { 00:22:39.717 "params": { 00:22:39.717 "name": "Nvme$subsystem", 00:22:39.717 "trtype": "$TEST_TRANSPORT", 00:22:39.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:39.717 "adrfam": "ipv4", 00:22:39.717 "trsvcid": "$NVMF_PORT", 00:22:39.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:39.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:39.717 "hdgst": ${hdgst:-false}, 00:22:39.717 "ddgst": ${ddgst:-false} 00:22:39.717 }, 00:22:39.717 "method": "bdev_nvme_attach_controller" 00:22:39.717 } 00:22:39.717 EOF 00:22:39.717 )") 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:39.717 11:41:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
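Note how the fio job is launched: the stock fio binary is used unchanged, the SPDK external ioengine built at build/fio/spdk_bdev is injected through LD_PRELOAD, the bdev layer is configured from JSON on /dev/fd/62, and the job file arrives on /dev/fd/61. A rough stand-alone equivalent (the process substitutions feeding fds 61 and 62 are an assumption about how the fio_bdev wrapper wires things up; gen_nvmf_target_json and gen_fio_conf are the harness helpers seen in the trace) would be:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62< <(gen_nvmf_target_json 0) 61< <(gen_fio_conf)

fio parses /dev/fd/61 as an ordinary job file, while the plugin reads the bdev_nvme_attach_controller configuration (printed just below) from /dev/fd/62 and attaches the Nvme0 controller before the workload starts.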
00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:39.717 "params": { 00:22:39.717 "name": "Nvme0", 00:22:39.717 "trtype": "tcp", 00:22:39.717 "traddr": "10.0.0.2", 00:22:39.717 "adrfam": "ipv4", 00:22:39.717 "trsvcid": "4420", 00:22:39.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:39.717 "hdgst": false, 00:22:39.717 "ddgst": false 00:22:39.717 }, 00:22:39.717 "method": "bdev_nvme_attach_controller" 00:22:39.717 }' 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:39.717 11:41:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:39.717 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:39.717 ... 
00:22:39.717 fio-3.35 00:22:39.717 Starting 3 threads 00:22:46.278 00:22:46.278 filename0: (groupid=0, jobs=1): err= 0: pid=98189: Mon Jul 15 11:41:22 2024 00:22:46.278 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(136MiB/5008msec) 00:22:46.278 slat (nsec): min=4885, max=63374, avg=20037.75, stdev=9489.50 00:22:46.278 clat (usec): min=7207, max=65347, avg=13756.38, stdev=7723.95 00:22:46.278 lat (usec): min=7222, max=65396, avg=13776.42, stdev=7726.70 00:22:46.278 clat percentiles (usec): 00:22:46.278 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10683], 00:22:46.278 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:22:46.278 | 70.00th=[12649], 80.00th=[13435], 90.00th=[17695], 95.00th=[24773], 00:22:46.278 | 99.00th=[53216], 99.50th=[55313], 99.90th=[65274], 99.95th=[65274], 00:22:46.278 | 99.99th=[65274] 00:22:46.278 bw ( KiB/s): min=18176, max=36096, per=36.58%, avg=27827.20, stdev=6171.26, samples=10 00:22:46.278 iops : min= 142, max= 282, avg=217.40, stdev=48.21, samples=10 00:22:46.278 lat (msec) : 10=10.09%, 20=80.73%, 50=6.70%, 100=2.48% 00:22:46.278 cpu : usr=91.47%, sys=6.63%, ctx=10, majf=0, minf=0 00:22:46.278 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:46.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.278 issued rwts: total=1090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:46.278 filename0: (groupid=0, jobs=1): err= 0: pid=98190: Mon Jul 15 11:41:22 2024 00:22:46.278 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(115MiB/5006msec) 00:22:46.278 slat (nsec): min=7875, max=75068, avg=22685.62, stdev=10510.74 00:22:46.278 clat (usec): min=5336, max=57523, avg=16346.34, stdev=5615.36 00:22:46.278 lat (usec): min=5361, max=57563, avg=16369.03, stdev=5616.86 00:22:46.278 clat percentiles (usec): 00:22:46.278 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[13566], 00:22:46.278 | 30.00th=[14615], 40.00th=[15270], 50.00th=[15795], 60.00th=[16188], 00:22:46.278 | 70.00th=[16909], 80.00th=[17695], 90.00th=[21890], 95.00th=[27919], 00:22:46.278 | 99.00th=[41157], 99.50th=[47973], 99.90th=[57410], 99.95th=[57410], 00:22:46.278 | 99.99th=[57410] 00:22:46.278 bw ( KiB/s): min=14336, max=29184, per=30.76%, avg=23398.40, stdev=4598.83, samples=10 00:22:46.278 iops : min= 112, max= 228, avg=182.80, stdev=35.93, samples=10 00:22:46.278 lat (msec) : 10=9.49%, 20=78.84%, 50=11.34%, 100=0.33% 00:22:46.278 cpu : usr=90.33%, sys=7.27%, ctx=100, majf=0, minf=9 00:22:46.278 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:46.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.278 issued rwts: total=917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:46.278 filename0: (groupid=0, jobs=1): err= 0: pid=98191: Mon Jul 15 11:41:22 2024 00:22:46.278 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(121MiB/5006msec) 00:22:46.278 slat (nsec): min=7871, max=85404, avg=21794.21, stdev=9701.68 00:22:46.278 clat (usec): min=7458, max=56566, avg=15462.57, stdev=7378.28 00:22:46.278 lat (usec): min=7479, max=56603, avg=15484.37, stdev=7381.15 00:22:46.278 clat percentiles (usec): 00:22:46.278 | 1.00th=[ 8160], 5.00th=[ 8979], 10.00th=[10945], 
20.00th=[12256], 00:22:46.278 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13829], 60.00th=[14222], 00:22:46.278 | 70.00th=[14746], 80.00th=[15533], 90.00th=[22414], 95.00th=[27395], 00:22:46.278 | 99.00th=[54264], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:22:46.278 | 99.99th=[56361] 00:22:46.278 bw ( KiB/s): min=15616, max=31488, per=32.55%, avg=24755.20, stdev=5159.74, samples=10 00:22:46.278 iops : min= 122, max= 246, avg=193.40, stdev=40.31, samples=10 00:22:46.278 lat (msec) : 10=8.46%, 20=79.88%, 50=9.80%, 100=1.86% 00:22:46.278 cpu : usr=90.61%, sys=7.09%, ctx=10, majf=0, minf=9 00:22:46.278 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:46.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:46.278 issued rwts: total=969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:46.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:46.278 00:22:46.278 Run status group 0 (all jobs): 00:22:46.278 READ: bw=74.3MiB/s (77.9MB/s), 22.9MiB/s-27.2MiB/s (24.0MB/s-28.5MB/s), io=372MiB (390MB), run=5006-5008msec 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 bdev_null0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.278 [2024-07-15 11:41:22.689280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:46.278 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 bdev_null1 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 bdev_null2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.279 { 00:22:46.279 "params": { 00:22:46.279 "name": "Nvme$subsystem", 00:22:46.279 "trtype": "$TEST_TRANSPORT", 00:22:46.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.279 "adrfam": "ipv4", 
00:22:46.279 "trsvcid": "$NVMF_PORT", 00:22:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.279 "hdgst": ${hdgst:-false}, 00:22:46.279 "ddgst": ${ddgst:-false} 00:22:46.279 }, 00:22:46.279 "method": "bdev_nvme_attach_controller" 00:22:46.279 } 00:22:46.279 EOF 00:22:46.279 )") 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.279 { 00:22:46.279 "params": { 00:22:46.279 "name": "Nvme$subsystem", 00:22:46.279 "trtype": "$TEST_TRANSPORT", 00:22:46.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.279 "adrfam": "ipv4", 00:22:46.279 "trsvcid": "$NVMF_PORT", 00:22:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.279 "hdgst": ${hdgst:-false}, 00:22:46.279 "ddgst": ${ddgst:-false} 00:22:46.279 }, 00:22:46.279 "method": "bdev_nvme_attach_controller" 00:22:46.279 } 00:22:46.279 EOF 00:22:46.279 )") 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:46.279 
11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:46.279 { 00:22:46.279 "params": { 00:22:46.279 "name": "Nvme$subsystem", 00:22:46.279 "trtype": "$TEST_TRANSPORT", 00:22:46.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:46.279 "adrfam": "ipv4", 00:22:46.279 "trsvcid": "$NVMF_PORT", 00:22:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:46.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:46.279 "hdgst": ${hdgst:-false}, 00:22:46.279 "ddgst": ${ddgst:-false} 00:22:46.279 }, 00:22:46.279 "method": "bdev_nvme_attach_controller" 00:22:46.279 } 00:22:46.279 EOF 00:22:46.279 )") 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:46.279 "params": { 00:22:46.279 "name": "Nvme0", 00:22:46.279 "trtype": "tcp", 00:22:46.279 "traddr": "10.0.0.2", 00:22:46.279 "adrfam": "ipv4", 00:22:46.279 "trsvcid": "4420", 00:22:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:46.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:46.279 "hdgst": false, 00:22:46.279 "ddgst": false 00:22:46.279 }, 00:22:46.279 "method": "bdev_nvme_attach_controller" 00:22:46.279 },{ 00:22:46.279 "params": { 00:22:46.279 "name": "Nvme1", 00:22:46.279 "trtype": "tcp", 00:22:46.279 "traddr": "10.0.0.2", 00:22:46.279 "adrfam": "ipv4", 00:22:46.279 "trsvcid": "4420", 00:22:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.279 "hdgst": false, 00:22:46.279 "ddgst": false 00:22:46.279 }, 00:22:46.279 "method": "bdev_nvme_attach_controller" 00:22:46.279 },{ 00:22:46.279 "params": { 00:22:46.279 "name": "Nvme2", 00:22:46.279 "trtype": "tcp", 00:22:46.279 "traddr": "10.0.0.2", 00:22:46.279 "adrfam": "ipv4", 00:22:46.279 "trsvcid": "4420", 00:22:46.279 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:46.279 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:46.279 "hdgst": false, 00:22:46.279 "ddgst": false 00:22:46.279 }, 00:22:46.279 "method": "bdev_nvme_attach_controller" 00:22:46.279 }' 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
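The per-controller JSON fragments printed above by gen_nvmf_target_json are handed to the fio spdk_bdev plugin over /dev/fd/62, with the generated job file arriving over /dev/fd/61, as the fio_bdev call in the trace shows. Below is a minimal standalone sketch of that invocation, not part of the captured log: the controller name, address, port and NQNs are copied from the printed config, while the outer "subsystems"/"bdev" wrapper, the /tmp paths and the use of plain temp files instead of process substitution are illustrative assumptions rather than the script's exact mechanics.

# Sketch only -- reconstructs the fio invocation visible in the trace above.
# Values inside "params" are copied from the printed config; the wrapper
# layout and temp-file handling are assumptions, and /tmp/dif.fio stands in
# for the generated job file.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio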
00:22:46.279 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:46.280 11:41:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:46.280 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:46.280 ... 00:22:46.280 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:46.280 ... 00:22:46.280 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:46.280 ... 00:22:46.280 fio-3.35 00:22:46.280 Starting 24 threads 00:23:01.173 00:23:01.173 filename0: (groupid=0, jobs=1): err= 0: pid=98285: Mon Jul 15 11:41:37 2024 00:23:01.173 read: IOPS=78, BW=314KiB/s (321kB/s)(3204KiB/10215msec) 00:23:01.173 slat (usec): min=4, max=4047, avg=33.29, stdev=231.59 00:23:01.173 clat (msec): min=39, max=526, avg=203.30, stdev=115.01 00:23:01.173 lat (msec): min=39, max=526, avg=203.34, stdev=115.01 00:23:01.173 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 48], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 89], 00:23:01.174 | 30.00th=[ 110], 40.00th=[ 138], 50.00th=[ 169], 60.00th=[ 243], 00:23:01.174 | 70.00th=[ 288], 80.00th=[ 313], 90.00th=[ 355], 95.00th=[ 393], 00:23:01.174 | 99.00th=[ 481], 99.50th=[ 506], 99.90th=[ 527], 99.95th=[ 527], 00:23:01.174 | 99.99th=[ 527] 00:23:01.174 bw ( KiB/s): min= 128, max= 920, per=3.06%, avg=313.90, stdev=196.14, samples=20 00:23:01.174 iops : min= 32, max= 230, avg=78.45, stdev=49.03, samples=20 00:23:01.174 lat (msec) : 50=1.25%, 100=24.59%, 250=38.20%, 500=34.96%, 750=1.00% 00:23:01.174 cpu : usr=41.39%, sys=1.74%, ctx=1330, majf=0, minf=9 00:23:01.174 IO depths : 1=0.4%, 2=1.0%, 4=6.9%, 8=78.0%, 16=13.7%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=89.3%, 8=6.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98286: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=90, BW=360KiB/s (369kB/s)(3680KiB/10218msec) 00:23:01.174 slat (nsec): min=4830, max=89352, avg=16399.16, stdev=10913.49 00:23:01.174 clat (msec): min=7, max=503, avg=176.44, stdev=105.94 00:23:01.174 lat (msec): min=7, max=503, avg=176.46, stdev=105.95 00:23:01.174 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 50], 20.00th=[ 73], 00:23:01.174 | 30.00th=[ 100], 40.00th=[ 125], 50.00th=[ 153], 60.00th=[ 213], 00:23:01.174 | 70.00th=[ 247], 80.00th=[ 292], 90.00th=[ 321], 95.00th=[ 330], 00:23:01.174 | 99.00th=[ 405], 99.50th=[ 451], 99.90th=[ 502], 99.95th=[ 502], 00:23:01.174 | 99.99th=[ 502] 00:23:01.174 bw ( KiB/s): min= 176, max= 1152, per=3.53%, avg=361.45, stdev=254.11, samples=20 00:23:01.174 iops : min= 44, max= 288, avg=90.35, stdev=63.53, samples=20 00:23:01.174 lat (msec) : 10=3.48%, 20=1.74%, 50=5.43%, 100=20.43%, 250=40.00% 00:23:01.174 lat (msec) : 500=28.48%, 750=0.43% 00:23:01.174 cpu : usr=42.96%, sys=1.93%, ctx=1350, majf=0, minf=9 00:23:01.174 IO depths : 1=1.1%, 2=2.2%, 4=9.1%, 8=75.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=89.6%, 8=5.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98287: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=72, BW=289KiB/s (296kB/s)(2944KiB/10200msec) 00:23:01.174 slat (nsec): min=5482, max=61142, avg=15260.27, stdev=8280.93 00:23:01.174 clat (msec): min=52, max=541, avg=221.58, stdev=132.93 00:23:01.174 lat (msec): min=52, max=541, avg=221.59, stdev=132.93 00:23:01.174 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 81], 00:23:01.174 | 30.00th=[ 111], 40.00th=[ 157], 50.00th=[ 226], 60.00th=[ 264], 00:23:01.174 | 70.00th=[ 279], 80.00th=[ 338], 90.00th=[ 397], 95.00th=[ 510], 00:23:01.174 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:23:01.174 | 99.99th=[ 542] 00:23:01.174 bw ( KiB/s): min= 128, max= 896, per=2.80%, avg=287.85, stdev=210.20, samples=20 00:23:01.174 iops : min= 32, max= 224, avg=71.95, stdev=52.54, samples=20 00:23:01.174 lat (msec) : 100=26.09%, 250=28.26%, 500=40.62%, 750=5.03% 00:23:01.174 cpu : usr=37.89%, sys=1.50%, ctx=1104, majf=0, minf=9 00:23:01.174 IO depths : 1=5.0%, 2=10.1%, 4=22.0%, 8=55.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=93.0%, 8=1.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98288: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=215, BW=863KiB/s (883kB/s)(8640KiB/10017msec) 00:23:01.174 slat (usec): min=3, max=4051, avg=18.44, stdev=87.64 00:23:01.174 clat (msec): min=20, max=637, avg=74.06, stdev=121.73 00:23:01.174 lat (msec): min=20, max=637, avg=74.08, stdev=121.73 00:23:01.174 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 25], 00:23:01.174 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 31], 60.00th=[ 31], 00:23:01.174 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 317], 95.00th=[ 409], 00:23:01.174 | 99.00th=[ 527], 99.50th=[ 527], 99.90th=[ 642], 99.95th=[ 642], 00:23:01.174 | 99.99th=[ 642] 00:23:01.174 bw ( KiB/s): min= 88, max= 2600, per=7.75%, avg=794.63, stdev=980.76, samples=19 00:23:01.174 iops : min= 22, max= 650, avg=198.63, stdev=245.21, samples=19 00:23:01.174 lat (msec) : 50=86.67%, 250=1.48%, 500=10.79%, 750=1.06% 00:23:01.174 cpu : usr=59.11%, sys=2.62%, ctx=1106, majf=0, minf=9 00:23:01.174 IO depths : 1=0.3%, 2=0.7%, 4=13.3%, 8=73.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=89.2%, 8=5.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98289: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=61, BW=247KiB/s (253kB/s)(2520KiB/10184msec) 00:23:01.174 slat (nsec): min=3787, max=52509, avg=16297.82, stdev=8678.10 00:23:01.174 clat (msec): min=45, max=522, avg=258.49, stdev=148.05 00:23:01.174 lat (msec): min=45, max=522, avg=258.51, stdev=148.05 00:23:01.174 clat percentiles (msec): 
00:23:01.174 | 1.00th=[ 46], 5.00th=[ 62], 10.00th=[ 73], 20.00th=[ 88], 00:23:01.174 | 30.00th=[ 144], 40.00th=[ 169], 50.00th=[ 257], 60.00th=[ 317], 00:23:01.174 | 70.00th=[ 393], 80.00th=[ 422], 90.00th=[ 464], 95.00th=[ 485], 00:23:01.174 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 523], 00:23:01.174 | 99.99th=[ 523] 00:23:01.174 bw ( KiB/s): min= 128, max= 848, per=2.39%, avg=245.50, stdev=182.37, samples=20 00:23:01.174 iops : min= 32, max= 212, avg=61.35, stdev=45.59, samples=20 00:23:01.174 lat (msec) : 50=4.13%, 100=21.75%, 250=23.65%, 500=47.94%, 750=2.54% 00:23:01.174 cpu : usr=36.81%, sys=1.55%, ctx=1340, majf=0, minf=9 00:23:01.174 IO depths : 1=3.3%, 2=7.1%, 4=17.6%, 8=61.6%, 16=10.3%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=91.7%, 8=3.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98290: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=64, BW=257KiB/s (264kB/s)(2624KiB/10193msec) 00:23:01.174 slat (nsec): min=4644, max=71463, avg=13982.36, stdev=7731.74 00:23:01.174 clat (msec): min=39, max=561, avg=248.48, stdev=145.11 00:23:01.174 lat (msec): min=39, max=561, avg=248.49, stdev=145.11 00:23:01.174 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 57], 5.00th=[ 66], 10.00th=[ 71], 20.00th=[ 95], 00:23:01.174 | 30.00th=[ 144], 40.00th=[ 169], 50.00th=[ 230], 60.00th=[ 271], 00:23:01.174 | 70.00th=[ 338], 80.00th=[ 405], 90.00th=[ 464], 95.00th=[ 493], 00:23:01.174 | 99.00th=[ 502], 99.50th=[ 502], 99.90th=[ 558], 99.95th=[ 558], 00:23:01.174 | 99.99th=[ 558] 00:23:01.174 bw ( KiB/s): min= 128, max= 768, per=2.49%, avg=255.90, stdev=184.83, samples=20 00:23:01.174 iops : min= 32, max= 192, avg=63.95, stdev=46.18, samples=20 00:23:01.174 lat (msec) : 50=0.76%, 100=21.95%, 250=27.29%, 500=47.26%, 750=2.74% 00:23:01.174 cpu : usr=37.62%, sys=1.14%, ctx=1109, majf=0, minf=9 00:23:01.174 IO depths : 1=3.7%, 2=7.6%, 4=19.2%, 8=60.4%, 16=9.1%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98291: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=72, BW=291KiB/s (298kB/s)(2968KiB/10194msec) 00:23:01.174 slat (usec): min=4, max=341, avg=15.50, stdev=20.09 00:23:01.174 clat (msec): min=35, max=544, avg=219.68, stdev=124.31 00:23:01.174 lat (msec): min=36, max=544, avg=219.70, stdev=124.31 00:23:01.174 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 45], 5.00th=[ 61], 10.00th=[ 80], 20.00th=[ 96], 00:23:01.174 | 30.00th=[ 109], 40.00th=[ 163], 50.00th=[ 190], 60.00th=[ 264], 00:23:01.174 | 70.00th=[ 296], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 426], 00:23:01.174 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:23:01.174 | 99.99th=[ 542] 00:23:01.174 bw ( KiB/s): min= 128, max= 864, per=2.83%, avg=290.45, stdev=186.91, samples=20 00:23:01.174 iops : min= 32, max= 216, avg=72.60, stdev=46.70, samples=20 00:23:01.174 lat (msec) : 50=2.43%, 100=23.85%, 250=29.78%, 500=41.78%, 750=2.16% 00:23:01.174 cpu : usr=39.74%, sys=1.75%, 
ctx=1211, majf=0, minf=9 00:23:01.174 IO depths : 1=4.0%, 2=8.8%, 4=19.5%, 8=59.0%, 16=8.6%, 32=0.0%, >=64=0.0% 00:23:01.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.174 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.174 filename0: (groupid=0, jobs=1): err= 0: pid=98292: Mon Jul 15 11:41:37 2024 00:23:01.174 read: IOPS=222, BW=889KiB/s (910kB/s)(8896KiB/10011msec) 00:23:01.174 slat (usec): min=4, max=8045, avg=16.88, stdev=170.49 00:23:01.174 clat (msec): min=12, max=675, avg=71.82, stdev=126.64 00:23:01.174 lat (msec): min=12, max=675, avg=71.84, stdev=126.64 00:23:01.174 clat percentiles (msec): 00:23:01.174 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 25], 00:23:01.174 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 30], 60.00th=[ 31], 00:23:01.174 | 70.00th=[ 31], 80.00th=[ 37], 90.00th=[ 255], 95.00th=[ 426], 00:23:01.174 | 99.00th=[ 527], 99.50th=[ 527], 99.90th=[ 676], 99.95th=[ 676], 00:23:01.174 | 99.99th=[ 676] 00:23:01.175 bw ( KiB/s): min= 128, max= 2560, per=7.71%, avg=790.84, stdev=995.82, samples=19 00:23:01.175 iops : min= 32, max= 640, avg=197.68, stdev=248.93, samples=19 00:23:01.175 lat (msec) : 20=4.32%, 50=83.45%, 100=0.72%, 250=1.44%, 500=7.55% 00:23:01.175 lat (msec) : 750=2.52% 00:23:01.175 cpu : usr=56.16%, sys=2.30%, ctx=1159, majf=0, minf=9 00:23:01.175 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98293: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=227, BW=908KiB/s (930kB/s)(9088KiB/10004msec) 00:23:01.175 slat (nsec): min=4712, max=51598, avg=16815.28, stdev=5516.78 00:23:01.175 clat (msec): min=4, max=647, avg=70.28, stdev=129.12 00:23:01.175 lat (msec): min=4, max=647, avg=70.30, stdev=129.12 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 25], 00:23:01.175 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 31], 00:23:01.175 | 70.00th=[ 31], 80.00th=[ 37], 90.00th=[ 190], 95.00th=[ 435], 00:23:01.175 | 99.00th=[ 550], 99.50th=[ 651], 99.90th=[ 651], 99.95th=[ 651], 00:23:01.175 | 99.99th=[ 651] 00:23:01.175 bw ( KiB/s): min= 126, max= 2560, per=8.29%, avg=849.67, stdev=1004.48, samples=18 00:23:01.175 iops : min= 31, max= 640, avg=212.39, stdev=251.14, samples=18 00:23:01.175 lat (msec) : 10=2.82%, 20=4.23%, 50=81.69%, 250=2.11%, 500=6.34% 00:23:01.175 lat (msec) : 750=2.82% 00:23:01.175 cpu : usr=53.29%, sys=2.60%, ctx=599, majf=0, minf=9 00:23:01.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98294: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=87, BW=351KiB/s 
(359kB/s)(3584KiB/10218msec) 00:23:01.175 slat (usec): min=7, max=8038, avg=28.10, stdev=299.83 00:23:01.175 clat (msec): min=7, max=462, avg=182.28, stdev=129.67 00:23:01.175 lat (msec): min=7, max=462, avg=182.30, stdev=129.67 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 8], 5.00th=[ 13], 10.00th=[ 41], 20.00th=[ 68], 00:23:01.175 | 30.00th=[ 83], 40.00th=[ 112], 50.00th=[ 140], 60.00th=[ 201], 00:23:01.175 | 70.00th=[ 284], 80.00th=[ 321], 90.00th=[ 388], 95.00th=[ 422], 00:23:01.175 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:23:01.175 | 99.99th=[ 464] 00:23:01.175 bw ( KiB/s): min= 126, max= 1142, per=3.43%, avg=351.40, stdev=287.19, samples=20 00:23:01.175 iops : min= 31, max= 285, avg=87.80, stdev=71.75, samples=20 00:23:01.175 lat (msec) : 10=3.57%, 20=1.79%, 50=8.82%, 100=19.98%, 250=34.60% 00:23:01.175 lat (msec) : 500=31.25% 00:23:01.175 cpu : usr=41.89%, sys=1.61%, ctx=1381, majf=0, minf=9 00:23:01.175 IO depths : 1=2.2%, 2=4.6%, 4=12.5%, 8=70.1%, 16=10.6%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98295: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=81, BW=325KiB/s (333kB/s)(3264KiB/10032msec) 00:23:01.175 slat (usec): min=4, max=4045, avg=18.71, stdev=141.36 00:23:01.175 clat (msec): min=36, max=543, avg=196.55, stdev=171.73 00:23:01.175 lat (msec): min=36, max=543, avg=196.57, stdev=171.73 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], 00:23:01.175 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 165], 60.00th=[ 259], 00:23:01.175 | 70.00th=[ 300], 80.00th=[ 368], 90.00th=[ 464], 95.00th=[ 506], 00:23:01.175 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:23:01.175 | 99.99th=[ 542] 00:23:01.175 bw ( KiB/s): min= 128, max= 1412, per=2.50%, avg=256.21, stdev=290.58, samples=19 00:23:01.175 iops : min= 32, max= 353, avg=64.05, stdev=72.65, samples=19 00:23:01.175 lat (msec) : 50=45.10%, 250=13.24%, 500=35.17%, 750=6.50% 00:23:01.175 cpu : usr=44.93%, sys=1.77%, ctx=1120, majf=0, minf=9 00:23:01.175 IO depths : 1=5.4%, 2=10.9%, 4=23.0%, 8=53.6%, 16=7.1%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98296: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=71, BW=287KiB/s (294kB/s)(2924KiB/10186msec) 00:23:01.175 slat (usec): min=4, max=4041, avg=20.42, stdev=149.24 00:23:01.175 clat (msec): min=43, max=779, avg=222.05, stdev=135.57 00:23:01.175 lat (msec): min=43, max=779, avg=222.07, stdev=135.57 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 44], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 92], 00:23:01.175 | 30.00th=[ 129], 40.00th=[ 159], 50.00th=[ 192], 60.00th=[ 259], 00:23:01.175 | 70.00th=[ 271], 80.00th=[ 338], 90.00th=[ 422], 95.00th=[ 506], 00:23:01.175 | 99.00th=[ 542], 99.50th=[ 776], 99.90th=[ 776], 99.95th=[ 776], 00:23:01.175 | 99.99th=[ 776] 
00:23:01.175 bw ( KiB/s): min= 88, max= 896, per=2.79%, avg=286.00, stdev=193.31, samples=20 00:23:01.175 iops : min= 22, max= 224, avg=71.50, stdev=48.33, samples=20 00:23:01.175 lat (msec) : 50=2.19%, 100=19.43%, 250=35.29%, 500=38.03%, 750=4.38% 00:23:01.175 lat (msec) : 1000=0.68% 00:23:01.175 cpu : usr=43.18%, sys=1.73%, ctx=1310, majf=0, minf=9 00:23:01.175 IO depths : 1=3.7%, 2=7.5%, 4=19.8%, 8=60.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=92.0%, 8=2.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98297: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=212, BW=851KiB/s (871kB/s)(8512KiB/10007msec) 00:23:01.175 slat (usec): min=4, max=8144, avg=20.23, stdev=176.44 00:23:01.175 clat (msec): min=23, max=635, avg=75.05, stdev=131.82 00:23:01.175 lat (msec): min=23, max=635, avg=75.07, stdev=131.83 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 25], 00:23:01.175 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 30], 60.00th=[ 31], 00:23:01.175 | 70.00th=[ 32], 80.00th=[ 37], 90.00th=[ 326], 95.00th=[ 456], 00:23:01.175 | 99.00th=[ 527], 99.50th=[ 634], 99.90th=[ 634], 99.95th=[ 634], 00:23:01.175 | 99.99th=[ 634] 00:23:01.175 bw ( KiB/s): min= 128, max= 2560, per=7.64%, avg=782.11, stdev=989.20, samples=19 00:23:01.175 iops : min= 32, max= 640, avg=195.53, stdev=247.30, samples=19 00:23:01.175 lat (msec) : 50=87.97%, 250=1.50%, 500=9.02%, 750=1.50% 00:23:01.175 cpu : usr=53.38%, sys=2.23%, ctx=606, majf=0, minf=9 00:23:01.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 issued rwts: total=2128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98298: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=85, BW=344KiB/s (352kB/s)(3488KiB/10147msec) 00:23:01.175 slat (usec): min=3, max=8050, avg=29.32, stdev=319.31 00:23:01.175 clat (msec): min=8, max=438, avg=185.90, stdev=127.71 00:23:01.175 lat (msec): min=8, max=438, avg=185.93, stdev=127.71 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 20], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 60], 00:23:01.175 | 30.00th=[ 69], 40.00th=[ 110], 50.00th=[ 144], 60.00th=[ 226], 00:23:01.175 | 70.00th=[ 275], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 405], 00:23:01.175 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:23:01.175 | 99.99th=[ 439] 00:23:01.175 bw ( KiB/s): min= 126, max= 1376, per=3.34%, avg=342.30, stdev=316.53, samples=20 00:23:01.175 iops : min= 31, max= 344, avg=85.55, stdev=79.15, samples=20 00:23:01.175 lat (msec) : 10=0.80%, 20=1.03%, 50=15.48%, 100=20.53%, 250=23.39% 00:23:01.175 lat (msec) : 500=38.76% 00:23:01.175 cpu : usr=36.80%, sys=1.49%, ctx=1075, majf=0, minf=9 00:23:01.175 IO depths : 1=3.1%, 2=6.4%, 4=15.1%, 8=65.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:23:01.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.175 
issued rwts: total=872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.175 filename1: (groupid=0, jobs=1): err= 0: pid=98299: Mon Jul 15 11:41:37 2024 00:23:01.175 read: IOPS=227, BW=908KiB/s (930kB/s)(9088KiB/10007msec) 00:23:01.175 slat (nsec): min=4954, max=51202, avg=15999.31, stdev=6736.91 00:23:01.175 clat (msec): min=8, max=650, avg=70.31, stdev=127.20 00:23:01.175 lat (msec): min=8, max=650, avg=70.33, stdev=127.20 00:23:01.175 clat percentiles (msec): 00:23:01.175 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 25], 00:23:01.175 | 30.00th=[ 25], 40.00th=[ 25], 50.00th=[ 26], 60.00th=[ 31], 00:23:01.175 | 70.00th=[ 31], 80.00th=[ 37], 90.00th=[ 218], 95.00th=[ 430], 00:23:01.175 | 99.00th=[ 518], 99.50th=[ 651], 99.90th=[ 651], 99.95th=[ 651], 00:23:01.175 | 99.99th=[ 651] 00:23:01.175 bw ( KiB/s): min= 126, max= 2560, per=8.67%, avg=888.94, stdev=1019.89, samples=17 00:23:01.175 iops : min= 31, max= 640, avg=222.18, stdev=254.97, samples=17 00:23:01.175 lat (msec) : 10=2.11%, 20=4.23%, 50=82.39%, 250=1.85%, 500=7.70% 00:23:01.175 lat (msec) : 750=1.72% 00:23:01.175 cpu : usr=53.69%, sys=2.14%, ctx=607, majf=0, minf=9 00:23:01.176 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename1: (groupid=0, jobs=1): err= 0: pid=98300: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=74, BW=298KiB/s (305kB/s)(3032KiB/10185msec) 00:23:01.176 slat (usec): min=8, max=4030, avg=27.16, stdev=146.13 00:23:01.176 clat (msec): min=48, max=541, avg=213.59, stdev=125.85 00:23:01.176 lat (msec): min=48, max=541, avg=213.62, stdev=125.85 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 49], 5.00th=[ 64], 10.00th=[ 71], 20.00th=[ 87], 00:23:01.176 | 30.00th=[ 111], 40.00th=[ 138], 50.00th=[ 182], 60.00th=[ 264], 00:23:01.176 | 70.00th=[ 305], 80.00th=[ 338], 90.00th=[ 397], 95.00th=[ 430], 00:23:01.176 | 99.00th=[ 498], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:23:01.176 | 99.99th=[ 542] 00:23:01.176 bw ( KiB/s): min= 128, max= 808, per=2.89%, avg=296.55, stdev=192.58, samples=20 00:23:01.176 iops : min= 32, max= 202, avg=74.10, stdev=48.15, samples=20 00:23:01.176 lat (msec) : 50=2.11%, 100=24.54%, 250=32.45%, 500=40.24%, 750=0.66% 00:23:01.176 cpu : usr=44.74%, sys=1.76%, ctx=1258, majf=0, minf=9 00:23:01.176 IO depths : 1=2.1%, 2=4.4%, 4=14.5%, 8=68.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=90.6%, 8=3.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98301: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=129, BW=517KiB/s (530kB/s)(5184KiB/10025msec) 00:23:01.176 slat (nsec): min=4872, max=63526, avg=14257.79, stdev=6718.63 00:23:01.176 clat (msec): min=27, max=619, avg=123.62, stdev=148.15 00:23:01.176 lat (msec): min=27, max=619, avg=123.64, stdev=148.15 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 
20.00th=[ 31], 00:23:01.176 | 30.00th=[ 31], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 37], 00:23:01.176 | 70.00th=[ 142], 80.00th=[ 266], 90.00th=[ 368], 95.00th=[ 430], 00:23:01.176 | 99.00th=[ 617], 99.50th=[ 617], 99.90th=[ 617], 99.95th=[ 617], 00:23:01.176 | 99.99th=[ 617] 00:23:01.176 bw ( KiB/s): min= 128, max= 2048, per=4.45%, avg=456.67, stdev=584.64, samples=18 00:23:01.176 iops : min= 32, max= 512, avg=114.17, stdev=146.16, samples=18 00:23:01.176 lat (msec) : 50=66.67%, 100=1.23%, 250=9.88%, 500=19.60%, 750=2.62% 00:23:01.176 cpu : usr=45.12%, sys=1.80%, ctx=808, majf=0, minf=9 00:23:01.176 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98302: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=74, BW=297KiB/s (304kB/s)(3032KiB/10203msec) 00:23:01.176 slat (nsec): min=4746, max=72323, avg=24686.05, stdev=12491.43 00:23:01.176 clat (msec): min=35, max=531, avg=215.19, stdev=131.63 00:23:01.176 lat (msec): min=35, max=531, avg=215.22, stdev=131.63 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 74], 00:23:01.176 | 30.00th=[ 118], 40.00th=[ 157], 50.00th=[ 184], 60.00th=[ 255], 00:23:01.176 | 70.00th=[ 292], 80.00th=[ 359], 90.00th=[ 401], 95.00th=[ 430], 00:23:01.176 | 99.00th=[ 498], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:23:01.176 | 99.99th=[ 531] 00:23:01.176 bw ( KiB/s): min= 128, max= 896, per=2.89%, avg=296.70, stdev=219.41, samples=20 00:23:01.176 iops : min= 32, max= 224, avg=74.15, stdev=54.86, samples=20 00:23:01.176 lat (msec) : 50=6.20%, 100=20.05%, 250=31.79%, 500=41.42%, 750=0.53% 00:23:01.176 cpu : usr=31.48%, sys=1.29%, ctx=873, majf=0, minf=9 00:23:01.176 IO depths : 1=2.2%, 2=4.7%, 4=13.6%, 8=68.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98303: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=81, BW=325KiB/s (333kB/s)(3320KiB/10208msec) 00:23:01.176 slat (nsec): min=4986, max=75680, avg=15175.94, stdev=10375.36 00:23:01.176 clat (msec): min=33, max=503, avg=196.12, stdev=112.37 00:23:01.176 lat (msec): min=33, max=503, avg=196.13, stdev=112.37 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 35], 5.00th=[ 59], 10.00th=[ 70], 20.00th=[ 84], 00:23:01.176 | 30.00th=[ 108], 40.00th=[ 138], 50.00th=[ 174], 60.00th=[ 222], 00:23:01.176 | 70.00th=[ 275], 80.00th=[ 300], 90.00th=[ 342], 95.00th=[ 409], 00:23:01.176 | 99.00th=[ 472], 99.50th=[ 477], 99.90th=[ 506], 99.95th=[ 506], 00:23:01.176 | 99.99th=[ 506] 00:23:01.176 bw ( KiB/s): min= 128, max= 912, per=3.17%, avg=325.50, stdev=202.71, samples=20 00:23:01.176 iops : min= 32, max= 228, avg=81.35, stdev=50.67, samples=20 00:23:01.176 lat (msec) : 50=3.01%, 100=25.54%, 250=37.71%, 500=33.25%, 750=0.48% 00:23:01.176 cpu : usr=41.99%, sys=1.69%, ctx=1275, majf=0, minf=9 00:23:01.176 IO depths : 
1=0.5%, 2=1.2%, 4=6.9%, 8=78.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=89.2%, 8=6.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98304: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=70, BW=284KiB/s (290kB/s)(2892KiB/10196msec) 00:23:01.176 slat (usec): min=4, max=8049, avg=24.88, stdev=298.94 00:23:01.176 clat (msec): min=49, max=467, avg=225.35, stdev=129.96 00:23:01.176 lat (msec): min=49, max=467, avg=225.38, stdev=129.95 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 61], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 87], 00:23:01.176 | 30.00th=[ 107], 40.00th=[ 157], 50.00th=[ 209], 60.00th=[ 279], 00:23:01.176 | 70.00th=[ 321], 80.00th=[ 359], 90.00th=[ 426], 95.00th=[ 451], 00:23:01.176 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:23:01.176 | 99.99th=[ 468] 00:23:01.176 bw ( KiB/s): min= 128, max= 768, per=2.75%, avg=282.80, stdev=189.36, samples=20 00:23:01.176 iops : min= 32, max= 192, avg=70.70, stdev=47.34, samples=20 00:23:01.176 lat (msec) : 50=0.97%, 100=27.52%, 250=29.46%, 500=42.05% 00:23:01.176 cpu : usr=35.35%, sys=1.30%, ctx=1135, majf=0, minf=9 00:23:01.176 IO depths : 1=3.3%, 2=7.2%, 4=17.0%, 8=62.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98305: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=71, BW=286KiB/s (293kB/s)(2912KiB/10192msec) 00:23:01.176 slat (usec): min=5, max=8037, avg=37.49, stdev=419.84 00:23:01.176 clat (msec): min=42, max=526, avg=223.57, stdev=138.47 00:23:01.176 lat (msec): min=42, max=526, avg=223.61, stdev=138.48 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 80], 00:23:01.176 | 30.00th=[ 107], 40.00th=[ 150], 50.00th=[ 188], 60.00th=[ 288], 00:23:01.176 | 70.00th=[ 321], 80.00th=[ 372], 90.00th=[ 409], 95.00th=[ 456], 00:23:01.176 | 99.00th=[ 527], 99.50th=[ 527], 99.90th=[ 527], 99.95th=[ 527], 00:23:01.176 | 99.99th=[ 527] 00:23:01.176 bw ( KiB/s): min= 128, max= 936, per=2.77%, avg=284.65, stdev=205.92, samples=20 00:23:01.176 iops : min= 32, max= 234, avg=71.10, stdev=51.47, samples=20 00:23:01.176 lat (msec) : 50=4.12%, 100=23.90%, 250=29.67%, 500=40.11%, 750=2.20% 00:23:01.176 cpu : usr=31.66%, sys=1.12%, ctx=869, majf=0, minf=9 00:23:01.176 IO depths : 1=3.2%, 2=6.5%, 4=15.7%, 8=64.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98306: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=61, BW=247KiB/s (253kB/s)(2520KiB/10194msec) 00:23:01.176 slat (usec): min=8, max=113, avg=26.67, stdev=15.67 00:23:01.176 clat (msec): min=32, max=723, 
avg=258.07, stdev=152.62 00:23:01.176 lat (msec): min=32, max=723, avg=258.09, stdev=152.62 00:23:01.176 clat percentiles (msec): 00:23:01.176 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 85], 20.00th=[ 101], 00:23:01.176 | 30.00th=[ 132], 40.00th=[ 171], 50.00th=[ 268], 60.00th=[ 313], 00:23:01.176 | 70.00th=[ 347], 80.00th=[ 414], 90.00th=[ 460], 95.00th=[ 510], 00:23:01.176 | 99.00th=[ 642], 99.50th=[ 651], 99.90th=[ 726], 99.95th=[ 726], 00:23:01.176 | 99.99th=[ 726] 00:23:01.176 bw ( KiB/s): min= 88, max= 816, per=2.39%, avg=245.50, stdev=179.71, samples=20 00:23:01.176 iops : min= 22, max= 204, avg=61.35, stdev=44.93, samples=20 00:23:01.176 lat (msec) : 50=2.54%, 100=17.94%, 250=29.05%, 500=43.81%, 750=6.67% 00:23:01.176 cpu : usr=31.82%, sys=1.52%, ctx=882, majf=0, minf=9 00:23:01.176 IO depths : 1=1.7%, 2=3.8%, 4=12.2%, 8=69.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:23:01.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 complete : 0=0.0%, 4=90.7%, 8=5.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.176 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.176 filename2: (groupid=0, jobs=1): err= 0: pid=98307: Mon Jul 15 11:41:37 2024 00:23:01.176 read: IOPS=81, BW=324KiB/s (332kB/s)(3312KiB/10210msec) 00:23:01.176 slat (usec): min=3, max=10352, avg=26.54, stdev=359.41 00:23:01.176 clat (msec): min=34, max=515, avg=195.11, stdev=108.55 00:23:01.177 lat (msec): min=34, max=515, avg=195.14, stdev=108.57 00:23:01.177 clat percentiles (msec): 00:23:01.177 | 1.00th=[ 35], 5.00th=[ 57], 10.00th=[ 71], 20.00th=[ 82], 00:23:01.177 | 30.00th=[ 108], 40.00th=[ 144], 50.00th=[ 180], 60.00th=[ 234], 00:23:01.177 | 70.00th=[ 264], 80.00th=[ 305], 90.00th=[ 334], 95.00th=[ 363], 00:23:01.177 | 99.00th=[ 447], 99.50th=[ 472], 99.90th=[ 514], 99.95th=[ 514], 00:23:01.177 | 99.99th=[ 514] 00:23:01.177 bw ( KiB/s): min= 176, max= 888, per=3.16%, avg=324.35, stdev=202.32, samples=20 00:23:01.177 iops : min= 44, max= 222, avg=81.05, stdev=50.60, samples=20 00:23:01.177 lat (msec) : 50=3.74%, 100=23.55%, 250=38.16%, 500=34.06%, 750=0.48% 00:23:01.177 cpu : usr=34.52%, sys=1.40%, ctx=1009, majf=0, minf=9 00:23:01.177 IO depths : 1=0.8%, 2=1.7%, 4=8.1%, 8=77.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:23:01.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.177 complete : 0=0.0%, 4=89.4%, 8=5.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.177 issued rwts: total=828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.177 filename2: (groupid=0, jobs=1): err= 0: pid=98308: Mon Jul 15 11:41:37 2024 00:23:01.177 read: IOPS=73, BW=295KiB/s (302kB/s)(3008KiB/10205msec) 00:23:01.177 slat (usec): min=4, max=4043, avg=19.89, stdev=147.15 00:23:01.177 clat (msec): min=42, max=546, avg=216.88, stdev=132.65 00:23:01.177 lat (msec): min=42, max=546, avg=216.90, stdev=132.65 00:23:01.177 clat percentiles (msec): 00:23:01.177 | 1.00th=[ 43], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 81], 00:23:01.177 | 30.00th=[ 109], 40.00th=[ 132], 50.00th=[ 192], 60.00th=[ 249], 00:23:01.177 | 70.00th=[ 313], 80.00th=[ 347], 90.00th=[ 414], 95.00th=[ 439], 00:23:01.177 | 99.00th=[ 531], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:23:01.177 | 99.99th=[ 550] 00:23:01.177 bw ( KiB/s): min= 126, max= 896, per=2.87%, avg=294.35, stdev=201.42, samples=20 00:23:01.177 iops : min= 31, max= 224, avg=73.55, stdev=50.35, samples=20 
00:23:01.177 lat (msec) : 50=1.33%, 100=26.99%, 250=31.78%, 500=37.90%, 750=1.99% 00:23:01.177 cpu : usr=36.03%, sys=1.41%, ctx=1030, majf=0, minf=9 00:23:01.177 IO depths : 1=2.8%, 2=6.1%, 4=17.6%, 8=63.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:23:01.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.177 complete : 0=0.0%, 4=91.6%, 8=2.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.177 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:01.177 00:23:01.177 Run status group 0 (all jobs): 00:23:01.177 READ: bw=10.0MiB/s (10.5MB/s), 247KiB/s-908KiB/s (253kB/s-930kB/s), io=102MiB (107MB), run=10004-10218msec 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 bdev_null0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 [2024-07-15 11:41:37.687382] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 bdev_null1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:01.177 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.178 { 00:23:01.178 "params": { 00:23:01.178 "name": "Nvme$subsystem", 00:23:01.178 
"trtype": "$TEST_TRANSPORT", 00:23:01.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.178 "adrfam": "ipv4", 00:23:01.178 "trsvcid": "$NVMF_PORT", 00:23:01.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.178 "hdgst": ${hdgst:-false}, 00:23:01.178 "ddgst": ${ddgst:-false} 00:23:01.178 }, 00:23:01.178 "method": "bdev_nvme_attach_controller" 00:23:01.178 } 00:23:01.178 EOF 00:23:01.178 )") 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:01.178 { 00:23:01.178 "params": { 00:23:01.178 "name": "Nvme$subsystem", 00:23:01.178 "trtype": "$TEST_TRANSPORT", 00:23:01.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.178 "adrfam": "ipv4", 00:23:01.178 "trsvcid": "$NVMF_PORT", 00:23:01.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.178 "hdgst": ${hdgst:-false}, 00:23:01.178 "ddgst": ${ddgst:-false} 00:23:01.178 }, 00:23:01.178 "method": "bdev_nvme_attach_controller" 00:23:01.178 } 00:23:01.178 EOF 00:23:01.178 )") 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:01.178 "params": { 00:23:01.178 "name": "Nvme0", 00:23:01.178 "trtype": "tcp", 00:23:01.178 "traddr": "10.0.0.2", 00:23:01.178 "adrfam": "ipv4", 00:23:01.178 "trsvcid": "4420", 00:23:01.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:01.178 "hdgst": false, 00:23:01.178 "ddgst": false 00:23:01.178 }, 00:23:01.178 "method": "bdev_nvme_attach_controller" 00:23:01.178 },{ 00:23:01.178 "params": { 00:23:01.178 "name": "Nvme1", 00:23:01.178 "trtype": "tcp", 00:23:01.178 "traddr": "10.0.0.2", 00:23:01.178 "adrfam": "ipv4", 00:23:01.178 "trsvcid": "4420", 00:23:01.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.178 "hdgst": false, 00:23:01.178 "ddgst": false 00:23:01.178 }, 00:23:01.178 "method": "bdev_nvme_attach_controller" 00:23:01.178 }' 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:01.178 11:41:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.178 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:01.178 ... 00:23:01.178 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:01.178 ... 
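The job descriptions that follow (randread with 8k reads / 16k writes / 128k trims at iodepth 8, run as threads) correspond to the parameters set at target/dif.sh@115 above: NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1. A rough sketch of a job file consistent with those values is given below; it is not the script's literal gen_fio_conf output, and the filename= entries (the bdev names exposed by the attached controllers) as well as the exact option set are assumptions.

# Sketch only -- a fio job file consistent with the parameters logged above.
# Section names match the job names fio reports; filename= values and the
# presence of thread=1/time_based=1 are assumptions.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF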
00:23:01.178 fio-3.35 00:23:01.178 Starting 4 threads 00:23:06.432 00:23:06.432 filename0: (groupid=0, jobs=1): err= 0: pid=98464: Mon Jul 15 11:41:43 2024 00:23:06.432 read: IOPS=644, BW=5154KiB/s (5278kB/s)(25.2MiB/5004msec) 00:23:06.432 slat (nsec): min=6789, max=66579, avg=20217.30, stdev=7889.60 00:23:06.432 clat (usec): min=3605, max=17220, avg=12308.59, stdev=1429.39 00:23:06.432 lat (usec): min=3611, max=17235, avg=12328.81, stdev=1430.09 00:23:06.432 clat percentiles (usec): 00:23:06.432 | 1.00th=[ 4113], 5.00th=[11076], 10.00th=[11469], 20.00th=[11600], 00:23:06.432 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:23:06.432 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:23:06.432 | 99.00th=[13698], 99.50th=[13829], 99.90th=[16712], 99.95th=[16909], 00:23:06.432 | 99.99th=[17171] 00:23:06.432 bw ( KiB/s): min= 4736, max= 5632, per=25.00%, avg=5145.60, stdev=334.87, samples=10 00:23:06.432 iops : min= 592, max= 704, avg=643.20, stdev=41.86, samples=10 00:23:06.432 lat (msec) : 4=0.37%, 10=2.17%, 20=97.46% 00:23:06.432 cpu : usr=92.90%, sys=5.94%, ctx=7, majf=0, minf=0 00:23:06.432 IO depths : 1=10.3%, 2=25.0%, 4=50.0%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:06.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 issued rwts: total=3224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:06.433 filename0: (groupid=0, jobs=1): err= 0: pid=98465: Mon Jul 15 11:41:43 2024 00:23:06.433 read: IOPS=642, BW=5143KiB/s (5266kB/s)(25.1MiB/5003msec) 00:23:06.433 slat (nsec): min=5040, max=51821, avg=18764.38, stdev=6965.18 00:23:06.433 clat (usec): min=2230, max=22486, avg=12350.94, stdev=1538.84 00:23:06.433 lat (usec): min=2247, max=22513, avg=12369.71, stdev=1538.11 00:23:06.433 clat percentiles (usec): 00:23:06.433 | 1.00th=[ 4113], 5.00th=[10945], 10.00th=[11469], 20.00th=[11600], 00:23:06.433 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:23:06.433 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13698], 00:23:06.433 | 99.00th=[15926], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:23:06.433 | 99.99th=[22414] 00:23:06.433 bw ( KiB/s): min= 4864, max= 5493, per=24.67%, avg=5076.11, stdev=235.00, samples=9 00:23:06.433 iops : min= 608, max= 686, avg=634.44, stdev=29.24, samples=9 00:23:06.433 lat (msec) : 4=0.31%, 10=2.86%, 20=96.80%, 50=0.03% 00:23:06.433 cpu : usr=94.24%, sys=4.64%, ctx=12, majf=0, minf=9 00:23:06.433 IO depths : 1=10.4%, 2=25.0%, 4=50.0%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:06.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:06.433 filename1: (groupid=0, jobs=1): err= 0: pid=98466: Mon Jul 15 11:41:43 2024 00:23:06.433 read: IOPS=642, BW=5143KiB/s (5266kB/s)(25.1MiB/5003msec) 00:23:06.433 slat (usec): min=4, max=199, avg=16.46, stdev= 6.39 00:23:06.433 clat (usec): min=2674, max=20383, avg=12359.63, stdev=1534.85 00:23:06.433 lat (usec): min=2691, max=20398, avg=12376.09, stdev=1533.79 00:23:06.433 clat percentiles (usec): 00:23:06.433 | 1.00th=[ 4113], 5.00th=[11076], 10.00th=[11469], 20.00th=[11600], 00:23:06.433 | 
30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:23:06.433 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13698], 00:23:06.433 | 99.00th=[15926], 99.50th=[16909], 99.90th=[18220], 99.95th=[19268], 00:23:06.433 | 99.99th=[20317] 00:23:06.433 bw ( KiB/s): min= 4864, max= 5504, per=24.67%, avg=5077.33, stdev=239.60, samples=9 00:23:06.433 iops : min= 608, max= 688, avg=634.67, stdev=29.95, samples=9 00:23:06.433 lat (msec) : 4=0.28%, 10=2.92%, 20=96.77%, 50=0.03% 00:23:06.433 cpu : usr=94.12%, sys=4.38%, ctx=63, majf=0, minf=9 00:23:06.433 IO depths : 1=9.9%, 2=25.0%, 4=50.0%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:06.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:06.433 filename1: (groupid=0, jobs=1): err= 0: pid=98467: Mon Jul 15 11:41:43 2024 00:23:06.433 read: IOPS=642, BW=5143KiB/s (5266kB/s)(25.1MiB/5003msec) 00:23:06.433 slat (nsec): min=7814, max=50218, avg=13611.04, stdev=7350.51 00:23:06.433 clat (usec): min=4003, max=16868, avg=12375.33, stdev=1393.42 00:23:06.433 lat (usec): min=4013, max=16883, avg=12388.94, stdev=1393.24 00:23:06.433 clat percentiles (usec): 00:23:06.433 | 1.00th=[ 4178], 5.00th=[11076], 10.00th=[11469], 20.00th=[11731], 00:23:06.433 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:23:06.433 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13698], 00:23:06.433 | 99.00th=[14353], 99.50th=[15795], 99.90th=[16581], 99.95th=[16909], 00:23:06.433 | 99.99th=[16909] 00:23:06.433 bw ( KiB/s): min= 4848, max= 5515, per=24.74%, avg=5091.00, stdev=266.11, samples=9 00:23:06.433 iops : min= 606, max= 689, avg=636.33, stdev=33.19, samples=9 00:23:06.433 lat (msec) : 10=3.14%, 20=96.86% 00:23:06.433 cpu : usr=94.12%, sys=4.88%, ctx=9, majf=0, minf=0 00:23:06.433 IO depths : 1=7.7%, 2=25.0%, 4=50.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:06.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.433 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:06.433 00:23:06.433 Run status group 0 (all jobs): 00:23:06.433 READ: bw=20.1MiB/s (21.1MB/s), 5143KiB/s-5154KiB/s (5266kB/s-5278kB/s), io=101MiB (105MB), run=5003-5004msec 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 00:23:06.433 real 0m27.007s 00:23:06.433 user 2m28.527s 00:23:06.433 sys 0m6.982s 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 ************************************ 00:23:06.433 END TEST fio_dif_rand_params 00:23:06.433 ************************************ 00:23:06.433 11:41:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:06.433 11:41:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:06.433 11:41:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:06.433 11:41:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 ************************************ 00:23:06.433 START TEST fio_dif_digest 00:23:06.433 ************************************ 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 
-- # create_subsystems 0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 bdev_null0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 [2024-07-15 11:41:43.828643] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:06.433 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.433 { 00:23:06.433 "params": { 00:23:06.433 "name": "Nvme$subsystem", 00:23:06.433 "trtype": "$TEST_TRANSPORT", 00:23:06.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.433 
"adrfam": "ipv4", 00:23:06.433 "trsvcid": "$NVMF_PORT", 00:23:06.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.433 "hdgst": ${hdgst:-false}, 00:23:06.433 "ddgst": ${ddgst:-false} 00:23:06.433 }, 00:23:06.433 "method": "bdev_nvme_attach_controller" 00:23:06.433 } 00:23:06.433 EOF 00:23:06.434 )") 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:06.434 "params": { 00:23:06.434 "name": "Nvme0", 00:23:06.434 "trtype": "tcp", 00:23:06.434 "traddr": "10.0.0.2", 00:23:06.434 "adrfam": "ipv4", 00:23:06.434 "trsvcid": "4420", 00:23:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:06.434 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:06.434 "hdgst": true, 00:23:06.434 "ddgst": true 00:23:06.434 }, 00:23:06.434 "method": "bdev_nvme_attach_controller" 00:23:06.434 }' 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:06.434 11:41:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:06.691 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:06.691 ... 
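Editor's note: for the digest run set up above, the target side is a single null bdev carrying 16 bytes of metadata with DIF type 3, exported over NVMe/TCP, while the attach shown in the printed JSON enables both header and data digests ("hdgst": true, "ddgst": true). A condensed sketch of the target-side calls, mirroring the rpc_cmd trace and assuming scripts/rpc.py is pointed at the running target with the TCP transport already created:

    # Illustrative only: same RPCs the harness issues through its rpc_cmd wrapper.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MiB, 512 B blocks, 16 B metadata
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The fio job itself uses the parameters set at the top of the test: bs=128k, iodepth=3, numjobs=3, runtime=10.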
00:23:06.691 fio-3.35 00:23:06.692 Starting 3 threads 00:23:18.881 00:23:18.881 filename0: (groupid=0, jobs=1): err= 0: pid=98568: Mon Jul 15 11:41:54 2024 00:23:18.881 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10005msec) 00:23:18.881 slat (nsec): min=4796, max=65043, avg=18393.42, stdev=7163.78 00:23:18.881 clat (usec): min=7028, max=56226, avg=15241.01, stdev=7580.17 00:23:18.881 lat (usec): min=7050, max=56234, avg=15259.41, stdev=7581.57 00:23:18.881 clat percentiles (usec): 00:23:18.881 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:23:18.881 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:23:18.881 | 70.00th=[13960], 80.00th=[14484], 90.00th=[16581], 95.00th=[27657], 00:23:18.881 | 99.00th=[50070], 99.50th=[51119], 99.90th=[55313], 99.95th=[56361], 00:23:18.881 | 99.99th=[56361] 00:23:18.881 bw ( KiB/s): min= 8192, max=30976, per=37.66%, avg=24818.53, stdev=7005.38, samples=19 00:23:18.881 iops : min= 64, max= 242, avg=193.89, stdev=54.73, samples=19 00:23:18.881 lat (msec) : 10=0.10%, 20=93.85%, 50=5.04%, 100=1.02% 00:23:18.881 cpu : usr=91.91%, sys=6.30%, ctx=8, majf=0, minf=0 00:23:18.881 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.881 issued rwts: total=1966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.881 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:18.881 filename0: (groupid=0, jobs=1): err= 0: pid=98569: Mon Jul 15 11:41:54 2024 00:23:18.881 read: IOPS=146, BW=18.3MiB/s (19.2MB/s)(183MiB/10005msec) 00:23:18.881 slat (nsec): min=8283, max=58329, avg=18323.05, stdev=6872.01 00:23:18.881 clat (usec): min=7951, max=68802, avg=20474.54, stdev=8545.71 00:23:18.881 lat (usec): min=7979, max=68828, avg=20492.87, stdev=8546.15 00:23:18.881 clat percentiles (usec): 00:23:18.881 | 1.00th=[14615], 5.00th=[15795], 10.00th=[16319], 20.00th=[17171], 00:23:18.881 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18220], 60.00th=[18744], 00:23:18.881 | 70.00th=[19268], 80.00th=[19792], 90.00th=[22676], 95.00th=[49021], 00:23:18.881 | 99.00th=[56886], 99.50th=[59507], 99.90th=[67634], 99.95th=[68682], 00:23:18.881 | 99.99th=[68682] 00:23:18.881 bw ( KiB/s): min= 6912, max=23808, per=28.17%, avg=18566.74, stdev=4792.64, samples=19 00:23:18.881 iops : min= 54, max= 186, avg=145.05, stdev=37.44, samples=19 00:23:18.881 lat (msec) : 10=0.07%, 20=80.60%, 50=14.96%, 100=4.37% 00:23:18.881 cpu : usr=92.31%, sys=6.21%, ctx=18, majf=0, minf=9 00:23:18.881 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.881 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.881 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:18.881 filename0: (groupid=0, jobs=1): err= 0: pid=98570: Mon Jul 15 11:41:54 2024 00:23:18.881 read: IOPS=172, BW=21.5MiB/s (22.5MB/s)(215MiB/10004msec) 00:23:18.881 slat (nsec): min=4844, max=64064, avg=18819.98, stdev=8688.08 00:23:18.881 clat (usec): min=8598, max=61684, avg=17409.39, stdev=7811.44 00:23:18.881 lat (usec): min=8618, max=61746, avg=17428.21, stdev=7814.54 00:23:18.881 clat percentiles (usec): 00:23:18.881 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13566], 20.00th=[14222], 
00:23:18.881 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15533], 60.00th=[15926], 00:23:18.881 | 70.00th=[16450], 80.00th=[17433], 90.00th=[19006], 95.00th=[40633], 00:23:18.881 | 99.00th=[52691], 99.50th=[55313], 99.90th=[61080], 99.95th=[61604], 00:23:18.881 | 99.99th=[61604] 00:23:18.881 bw ( KiB/s): min= 7680, max=26112, per=33.35%, avg=21975.58, stdev=5790.90, samples=19 00:23:18.881 iops : min= 60, max= 204, avg=171.68, stdev=45.24, samples=19 00:23:18.882 lat (msec) : 10=0.35%, 20=91.63%, 50=6.22%, 100=1.80% 00:23:18.882 cpu : usr=92.15%, sys=6.14%, ctx=9, majf=0, minf=9 00:23:18.882 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.882 issued rwts: total=1721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:18.882 00:23:18.882 Run status group 0 (all jobs): 00:23:18.882 READ: bw=64.4MiB/s (67.5MB/s), 18.3MiB/s-24.6MiB/s (19.2MB/s-25.8MB/s), io=644MiB (675MB), run=10004-10005msec 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.882 ************************************ 00:23:18.882 END TEST fio_dif_digest 00:23:18.882 ************************************ 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.882 00:23:18.882 real 0m10.929s 00:23:18.882 user 0m28.257s 00:23:18.882 sys 0m2.096s 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.882 11:41:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:18.882 11:41:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:18.882 11:41:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.882 rmmod nvme_tcp 00:23:18.882 rmmod 
nvme_fabrics 00:23:18.882 rmmod nvme_keyring 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97806 ']' 00:23:18.882 11:41:54 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97806 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97806 ']' 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97806 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97806 00:23:18.882 killing process with pid 97806 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97806' 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97806 00:23:18.882 11:41:54 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97806 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:18.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:18.882 Waiting for block devices as requested 00:23:18.882 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:18.882 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.882 11:41:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:18.882 11:41:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.882 11:41:55 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:18.882 00:23:18.882 real 1m2.164s 00:23:18.882 user 4m14.627s 00:23:18.882 sys 0m16.574s 00:23:18.882 11:41:55 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.882 11:41:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:18.882 ************************************ 00:23:18.882 END TEST nvmf_dif 00:23:18.882 ************************************ 00:23:18.882 11:41:55 -- common/autotest_common.sh@1142 -- # return 0 00:23:18.882 11:41:55 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:18.882 11:41:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.882 11:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.882 11:41:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.882 ************************************ 00:23:18.882 START TEST nvmf_abort_qd_sizes 00:23:18.882 ************************************ 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:18.882 * Looking for test storage... 00:23:18.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:18.882 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:18.883 11:41:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:18.883 Cannot find device "nvmf_tgt_br" 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:18.883 Cannot find device "nvmf_tgt_br2" 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:18.883 Cannot find device "nvmf_tgt_br" 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:18.883 Cannot find device "nvmf_tgt_br2" 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:18.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:18.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:18.883 11:41:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:18.883 11:41:55 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:18.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:23:18.883 00:23:18.883 --- 10.0.0.2 ping statistics --- 00:23:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.883 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:18.883 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:18.883 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:23:18.883 00:23:18.883 --- 10.0.0.3 ping statistics --- 00:23:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.883 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:18.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:18.883 00:23:18.883 --- 10.0.0.1 ping statistics --- 00:23:18.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.883 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:18.883 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:19.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:19.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:19.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:19.449 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.449 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.449 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.449 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.449 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.449 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99153 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99153 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99153 ']' 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.706 11:41:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 [2024-07-15 11:41:56.991050] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
00:23:19.706 [2024-07-15 11:41:56.991162] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.706 [2024-07-15 11:41:57.127625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.964 [2024-07-15 11:41:57.194195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.964 [2024-07-15 11:41:57.194264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.964 [2024-07-15 11:41:57.194277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.964 [2024-07-15 11:41:57.194286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.964 [2024-07-15 11:41:57.194294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.964 [2024-07-15 11:41:57.194428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.964 [2024-07-15 11:41:57.194515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.964 [2024-07-15 11:41:57.194955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.964 [2024-07-15 11:41:57.194972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:19.964 11:41:57 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.964 11:41:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:19.964 ************************************ 00:23:19.964 START TEST spdk_target_abort 00:23:19.964 ************************************ 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:19.964 spdk_targetn1 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.964 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:19.964 [2024-07-15 11:41:57.435111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:20.222 [2024-07-15 11:41:57.463349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.222 11:41:57 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:20.222 11:41:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:23.501 Initializing NVMe Controllers 00:23:23.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:23.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:23.501 Initialization complete. Launching workers. 
00:23:23.501 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10324, failed: 0 00:23:23.501 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1074, failed to submit 9250 00:23:23.501 success 811, unsuccess 263, failed 0 00:23:23.501 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:23.501 11:42:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:26.795 Initializing NVMe Controllers 00:23:26.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:26.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:26.795 Initialization complete. Launching workers. 00:23:26.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5975, failed: 0 00:23:26.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 4720 00:23:26.795 success 256, unsuccess 999, failed 0 00:23:26.795 11:42:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:26.795 11:42:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:30.081 Initializing NVMe Controllers 00:23:30.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:30.081 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:30.081 Initialization complete. Launching workers. 
00:23:30.081 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28260, failed: 0 00:23:30.081 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2589, failed to submit 25671 00:23:30.081 success 322, unsuccess 2267, failed 0 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.081 11:42:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99153 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99153 ']' 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99153 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99153 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:31.031 killing process with pid 99153 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99153' 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99153 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99153 00:23:31.031 ************************************ 00:23:31.031 END TEST spdk_target_abort 00:23:31.031 ************************************ 00:23:31.031 00:23:31.031 real 0m11.101s 00:23:31.031 user 0m41.288s 00:23:31.031 sys 0m1.822s 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:31.031 11:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:31.031 11:42:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:31.031 11:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:31.031 11:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:31.031 11:42:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:31.031 
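Teardown of this first test is the mirror image of the setup: the subsystem is deleted, the PCIe controller is released, and the nvmf_tgt process (pid 99153 in this run) is killed and reaped. Roughly, per the trace:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
    scripts/rpc.py bdev_nvme_detach_controller spdk_target
    kill 99153 && wait 99153    # what killprocess from autotest_common.sh boils down to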
************************************ 00:23:31.031 START TEST kernel_target_abort 00:23:31.031 ************************************ 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:31.031 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:31.290 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:31.290 11:42:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:31.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:31.548 Waiting for block devices as requested 00:23:31.548 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:31.548 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:31.807 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:31.808 No valid GPT data, bailing 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:31.808 No valid GPT data, bailing 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:31.808 No valid GPT data, bailing 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:31.808 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:32.066 No valid GPT data, bailing 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:32.066 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 --hostid=891080d4-f96c-4735-b9e2-e3ce9892e421 -a 10.0.0.1 -t tcp -s 4420 00:23:32.067 00:23:32.067 Discovery Log Number of Records 2, Generation counter 2 00:23:32.067 =====Discovery Log Entry 0====== 00:23:32.067 trtype: tcp 00:23:32.067 adrfam: ipv4 00:23:32.067 subtype: current discovery subsystem 00:23:32.067 treq: not specified, sq flow control disable supported 00:23:32.067 portid: 1 00:23:32.067 trsvcid: 4420 00:23:32.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:32.067 traddr: 10.0.0.1 00:23:32.067 eflags: none 00:23:32.067 sectype: none 00:23:32.067 =====Discovery Log Entry 1====== 00:23:32.067 trtype: tcp 00:23:32.067 adrfam: ipv4 00:23:32.067 subtype: nvme subsystem 00:23:32.067 treq: not specified, sq flow control disable supported 00:23:32.067 portid: 1 00:23:32.067 trsvcid: 4420 00:23:32.067 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:32.067 traddr: 10.0.0.1 00:23:32.067 eflags: none 00:23:32.067 sectype: none 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:32.067 11:42:09 
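The kernel_target_abort variant builds the target out of the in-kernel nvmet driver instead of nvmf_tgt. xtrace does not print redirection targets, so the file names below are the standard nvmet configfs attributes and are an assumption, not visible in the log; the mkdir/echo/ln sequence itself is as traced:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the first echo
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover call that follows confirms the kernel target is listening on 10.0.0.1:4420 before the same abort sweep is repeated against it.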
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:32.067 11:42:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:35.360 Initializing NVMe Controllers 00:23:35.360 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:35.360 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:35.360 Initialization complete. Launching workers. 00:23:35.360 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33003, failed: 0 00:23:35.360 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33003, failed to submit 0 00:23:35.360 success 0, unsuccess 33003, failed 0 00:23:35.360 11:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:35.360 11:42:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:38.642 Initializing NVMe Controllers 00:23:38.642 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:38.642 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:38.642 Initialization complete. Launching workers. 
00:23:38.642 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63968, failed: 0 00:23:38.642 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26273, failed to submit 37695 00:23:38.642 success 0, unsuccess 26273, failed 0 00:23:38.642 11:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:38.642 11:42:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:41.962 Initializing NVMe Controllers 00:23:41.962 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:41.962 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:41.962 Initialization complete. Launching workers. 00:23:41.962 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67818, failed: 0 00:23:41.962 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17008, failed to submit 50810 00:23:41.962 success 0, unsuccess 17008, failed 0 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:41.962 11:42:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:42.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:43.592 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:43.592 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:43.849 00:23:43.849 real 0m12.608s 00:23:43.849 user 0m6.114s 00:23:43.849 sys 0m3.773s 00:23:43.849 11:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.849 11:42:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:43.849 ************************************ 00:23:43.849 END TEST kernel_target_abort 00:23:43.849 ************************************ 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:43.849 
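clean_kernel_target unwinds that configuration in reverse order before handing the drives back to the userspace driver. All paths except the target of the "echo 0" (inferred here to be the namespace enable attribute) appear verbatim in the trace:

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet
    scripts/setup.sh    # rebind the NVMe devices to uio_pci_generic for later tests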
11:42:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.849 rmmod nvme_tcp 00:23:43.849 rmmod nvme_fabrics 00:23:43.849 rmmod nvme_keyring 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.849 Process with pid 99153 is not found 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99153 ']' 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99153 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99153 ']' 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99153 00:23:43.849 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99153) - No such process 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99153 is not found' 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:43.849 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:44.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:44.106 Waiting for block devices as requested 00:23:44.106 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:44.364 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:44.364 ************************************ 00:23:44.364 END TEST nvmf_abort_qd_sizes 00:23:44.364 ************************************ 00:23:44.364 00:23:44.364 real 0m26.135s 00:23:44.364 user 0m48.291s 00:23:44.364 sys 0m6.796s 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:44.364 11:42:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:44.364 11:42:21 -- common/autotest_common.sh@1142 -- # return 0 00:23:44.364 11:42:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:44.364 11:42:21 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:23:44.364 11:42:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:44.364 11:42:21 -- common/autotest_common.sh@10 -- # set +x 00:23:44.364 ************************************ 00:23:44.364 START TEST keyring_file 00:23:44.364 ************************************ 00:23:44.364 11:42:21 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:44.621 * Looking for test storage... 00:23:44.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:44.621 11:42:21 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:44.621 11:42:21 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.621 11:42:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:44.622 11:42:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.622 11:42:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.622 11:42:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.622 11:42:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.622 11:42:21 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.622 11:42:21 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.622 11:42:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:44.622 11:42:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ae8W3wGPxR 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ae8W3wGPxR 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ae8W3wGPxR 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ae8W3wGPxR 00:23:44.622 11:42:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mW5SiNsI0K 00:23:44.622 11:42:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:44.622 11:42:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:44.622 11:42:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mW5SiNsI0K 00:23:44.622 11:42:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mW5SiNsI0K 00:23:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.622 11:42:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mW5SiNsI0K 00:23:44.622 11:42:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=100023 00:23:44.622 11:42:22 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:44.622 11:42:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100023 00:23:44.622 11:42:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100023 ']' 00:23:44.622 11:42:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.622 11:42:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.622 11:42:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.622 11:42:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.622 11:42:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:44.622 [2024-07-15 11:42:22.065449] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 
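Before the keyring tests start, file.sh turns each raw hex key into an NVMe TLS PSK interchange file. The trace shows the mktemp, the format_interchange_psk call, and the chmod; the encoding itself is produced by the python one-liner in nvmf/common.sh and is not reproduced here. Schematically:

    key0=00112233445566778899aabbccddeeff
    key0path=$(mktemp)                       # /tmp/tmp.ae8W3wGPxR in this run
    # wraps the hex key with the NVMeTLSkey-1 interchange prefix (base64 payload;
    # exact encoding lives in nvmf/common.sh)
    format_interchange_psk "$key0" 0 > "$key0path"
    chmod 0600 "$key0path"

key1 gets the same treatment (/tmp/tmp.mW5SiNsI0K). The 0600 mode matters: a later negative test flips key0's file to 0660 and expects keyring_file_add_key to reject it.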
00:23:44.622 [2024-07-15 11:42:22.065574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100023 ] 00:23:44.880 [2024-07-15 11:42:22.199428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.880 [2024-07-15 11:42:22.258385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.137 11:42:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.137 11:42:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:45.138 11:42:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:45.138 [2024-07-15 11:42:22.431767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.138 null0 00:23:45.138 [2024-07-15 11:42:22.463742] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.138 [2024-07-15 11:42:22.464273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:45.138 [2024-07-15 11:42:22.471727] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.138 11:42:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:45.138 [2024-07-15 11:42:22.483746] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:45.138 2024/07/15 11:42:22 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:45.138 request: 00:23:45.138 { 00:23:45.138 "method": "nvmf_subsystem_add_listener", 00:23:45.138 "params": { 00:23:45.138 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.138 "secure_channel": false, 00:23:45.138 "listen_address": { 00:23:45.138 "trtype": "tcp", 00:23:45.138 "traddr": "127.0.0.1", 00:23:45.138 "trsvcid": "4420" 00:23:45.138 } 00:23:45.138 } 00:23:45.138 } 00:23:45.138 Got JSON-RPC error 
response 00:23:45.138 GoRPCClient: error on JSON-RPC call 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:45.138 11:42:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=100043 00:23:45.138 11:42:22 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:45.138 11:42:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100043 /var/tmp/bperf.sock 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100043 ']' 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:45.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.138 11:42:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:45.138 [2024-07-15 11:42:22.557087] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:23:45.138 [2024-07-15 11:42:22.557340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100043 ] 00:23:45.395 [2024-07-15 11:42:22.692860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.395 [2024-07-15 11:42:22.752923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.343 11:42:23 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.343 11:42:23 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:46.343 11:42:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:46.343 11:42:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:46.600 11:42:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mW5SiNsI0K 00:23:46.600 11:42:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mW5SiNsI0K 00:23:46.858 11:42:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:46.858 11:42:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:46.858 11:42:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:46.858 11:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:46.858 11:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:47.424 11:42:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ae8W3wGPxR == 
\/\t\m\p\/\t\m\p\.\a\e\8\W\3\w\G\P\x\R ]] 00:23:47.424 11:42:24 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:47.424 11:42:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:47.424 11:42:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:47.424 11:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:47.424 11:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:47.681 11:42:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mW5SiNsI0K == \/\t\m\p\/\t\m\p\.\m\W\5\S\i\N\s\I\0\K ]] 00:23:47.681 11:42:24 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:47.681 11:42:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:47.681 11:42:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:47.681 11:42:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:47.681 11:42:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:47.681 11:42:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:47.939 11:42:25 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:47.939 11:42:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:47.939 11:42:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:47.939 11:42:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:47.939 11:42:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:47.939 11:42:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:47.939 11:42:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:48.196 11:42:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:48.196 11:42:25 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:48.196 11:42:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:48.761 [2024-07-15 11:42:26.052785] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.761 nvme0n1 00:23:48.761 11:42:26 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:48.762 11:42:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:48.762 11:42:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:48.762 11:42:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:48.762 11:42:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.762 11:42:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:49.328 11:42:26 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:49.328 11:42:26 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:49.328 11:42:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:49.328 11:42:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:49.328 11:42:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:23:49.328 11:42:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:49.328 11:42:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:49.585 11:42:26 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:49.585 11:42:26 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:49.843 Running I/O for 1 seconds... 00:23:50.775 00:23:50.775 Latency(us) 00:23:50.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.775 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:50.775 nvme0n1 : 1.01 10190.46 39.81 0.00 0.00 12511.43 7477.06 22043.93 00:23:50.775 =================================================================================================================== 00:23:50.775 Total : 10190.46 39.81 0.00 0.00 12511.43 7477.06 22043.93 00:23:50.775 0 00:23:50.775 11:42:28 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:50.775 11:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:51.033 11:42:28 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:51.033 11:42:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:51.033 11:42:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:51.033 11:42:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:51.033 11:42:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:51.033 11:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:51.384 11:42:28 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:51.384 11:42:28 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:51.384 11:42:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:51.384 11:42:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:51.384 11:42:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:51.384 11:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:51.384 11:42:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:51.644 11:42:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:51.644 11:42:28 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:51.644 11:42:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:51.644 11:42:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:51.644 11:42:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:51.644 11:42:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.644 11:42:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:51.645 11:42:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:23:51.645 11:42:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:51.645 11:42:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:51.902 [2024-07-15 11:42:29.280750] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:51.902 [2024-07-15 11:42:29.281523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b8f30 (107): Transport endpoint is not connected 00:23:51.902 [2024-07-15 11:42:29.282508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b8f30 (9): Bad file descriptor 00:23:51.902 [2024-07-15 11:42:29.283504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:51.902 [2024-07-15 11:42:29.283528] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:51.902 [2024-07-15 11:42:29.283538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:51.902 2024/07/15 11:42:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:51.902 request: 00:23:51.902 { 00:23:51.902 "method": "bdev_nvme_attach_controller", 00:23:51.902 "params": { 00:23:51.902 "name": "nvme0", 00:23:51.902 "trtype": "tcp", 00:23:51.902 "traddr": "127.0.0.1", 00:23:51.902 "adrfam": "ipv4", 00:23:51.902 "trsvcid": "4420", 00:23:51.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:51.902 "prchk_reftag": false, 00:23:51.902 "prchk_guard": false, 00:23:51.902 "hdgst": false, 00:23:51.902 "ddgst": false, 00:23:51.902 "psk": "key1" 00:23:51.902 } 00:23:51.902 } 00:23:51.902 Got JSON-RPC error response 00:23:51.902 GoRPCClient: error on JSON-RPC call 00:23:51.902 11:42:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:51.902 11:42:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:51.902 11:42:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:51.902 11:42:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:51.902 11:42:29 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:51.902 11:42:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:51.902 11:42:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:51.902 11:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:51.902 11:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:51.902 11:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.468 11:42:29 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:52.468 
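The I/O pass above and the failure here are the two halves of the PSK check: bdevperf (listening on /var/tmp/bperf.sock) is handed both key files, attaches to the loopback target with --psk key0, and runs perform_tests successfully, while the second attach with --psk key1 is wrapped in NOT and is expected to fail, presumably because the target side was provisioned with key0's PSK path. The positive path, with arguments as logged:

    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mW5SiNsI0K
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests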
11:42:29 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:52.468 11:42:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:52.468 11:42:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:52.468 11:42:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:52.468 11:42:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:52.468 11:42:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:52.726 11:42:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:52.726 11:42:30 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:52.726 11:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:52.984 11:42:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:52.984 11:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:53.241 11:42:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:53.241 11:42:30 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:53.241 11:42:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:53.805 11:42:31 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:53.805 11:42:31 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ae8W3wGPxR 00:23:53.805 11:42:31 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.805 11:42:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:53.805 11:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:54.061 [2024-07-15 11:42:31.434033] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ae8W3wGPxR': 0100660 00:23:54.061 [2024-07-15 11:42:31.434088] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:54.061 2024/07/15 11:42:31 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.ae8W3wGPxR], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:23:54.061 request: 00:23:54.061 { 00:23:54.061 "method": "keyring_file_add_key", 00:23:54.061 "params": { 00:23:54.061 "name": "key0", 00:23:54.061 "path": "/tmp/tmp.ae8W3wGPxR" 00:23:54.061 } 00:23:54.061 } 00:23:54.061 Got JSON-RPC error response 00:23:54.061 GoRPCClient: error on JSON-RPC call 00:23:54.061 11:42:31 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:23:54.061 11:42:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:54.061 11:42:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:54.061 11:42:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:54.061 11:42:31 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ae8W3wGPxR 00:23:54.061 11:42:31 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:54.061 11:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ae8W3wGPxR 00:23:54.625 11:42:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ae8W3wGPxR 00:23:54.625 11:42:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:54.625 11:42:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:54.625 11:42:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:54.625 11:42:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:54.625 11:42:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:54.625 11:42:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:54.882 11:42:32 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:54.882 11:42:32 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.882 11:42:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:54.882 11:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:55.139 [2024-07-15 11:42:32.554241] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ae8W3wGPxR': No such file or directory 00:23:55.139 [2024-07-15 11:42:32.554295] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:55.139 [2024-07-15 11:42:32.554322] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:55.139 [2024-07-15 11:42:32.554331] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:55.139 [2024-07-15 11:42:32.554339] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:55.139 2024/07/15 
11:42:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:23:55.139 request: 00:23:55.139 { 00:23:55.139 "method": "bdev_nvme_attach_controller", 00:23:55.139 "params": { 00:23:55.139 "name": "nvme0", 00:23:55.139 "trtype": "tcp", 00:23:55.139 "traddr": "127.0.0.1", 00:23:55.139 "adrfam": "ipv4", 00:23:55.139 "trsvcid": "4420", 00:23:55.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:55.139 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:55.139 "prchk_reftag": false, 00:23:55.139 "prchk_guard": false, 00:23:55.139 "hdgst": false, 00:23:55.139 "ddgst": false, 00:23:55.139 "psk": "key0" 00:23:55.139 } 00:23:55.139 } 00:23:55.139 Got JSON-RPC error response 00:23:55.139 GoRPCClient: error on JSON-RPC call 00:23:55.139 11:42:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:55.139 11:42:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.139 11:42:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.139 11:42:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.139 11:42:32 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:55.139 11:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:55.702 11:42:32 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EEa1j9bFh8 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:55.702 11:42:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:55.702 11:42:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:55.702 11:42:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:55.702 11:42:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:55.702 11:42:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:55.702 11:42:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EEa1j9bFh8 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EEa1j9bFh8 00:23:55.702 11:42:32 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.EEa1j9bFh8 00:23:55.702 11:42:32 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEa1j9bFh8 00:23:55.702 11:42:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EEa1j9bFh8 00:23:55.959 11:42:33 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:55.959 11:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:56.524 nvme0n1 00:23:56.524 11:42:33 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:56.524 11:42:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:56.524 11:42:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:56.524 11:42:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:56.524 11:42:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:56.524 11:42:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:56.782 11:42:34 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:56.782 11:42:34 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:56.782 11:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:57.346 11:42:34 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:57.346 11:42:34 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:57.346 11:42:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.346 11:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.346 11:42:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:57.603 11:42:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:57.603 11:42:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:57.603 11:42:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:57.603 11:42:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:57.603 11:42:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.603 11:42:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:57.603 11:42:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.860 11:42:35 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:57.860 11:42:35 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:57.860 11:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:58.425 11:42:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:58.425 11:42:35 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:58.425 11:42:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:58.682 11:42:36 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:58.682 11:42:36 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EEa1j9bFh8 00:23:58.682 11:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EEa1j9bFh8 
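The two failures above pin down the preconditions for using a file-backed PSK: keyring_file_add_key rejects a key file whose mode is wider than 0600 ("Invalid permissions for key file ...: 0100660"), and the key is re-read from the file when the controller is attached, so deleting the file after registration breaks the attach ("Could not stat key file ...: No such file or directory"). A sketch of the working sequence the trace then settles on, with an illustrative temp path and the same key material as above; the helper names and signatures are taken from the trace and may differ slightly from the real scripts:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

keyfile=$(mktemp)                                          # e.g. /tmp/tmp.EEa1j9bFh8 above
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$keyfile"
chmod 0600 "$keyfile"                                      # 0660 is rejected by keyring_file_add_key

rpc keyring_file_add_key key0 "$keyfile"                   # register the PSK under the name "key0"

# The file must still exist here: the key material is resolved again at attach time.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0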
00:23:58.940 11:42:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mW5SiNsI0K 00:23:58.940 11:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mW5SiNsI0K 00:23:59.197 11:42:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:59.197 11:42:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:59.762 nvme0n1 00:23:59.762 11:42:37 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:59.763 11:42:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:00.021 11:42:37 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:00.021 "subsystems": [ 00:24:00.021 { 00:24:00.021 "subsystem": "keyring", 00:24:00.021 "config": [ 00:24:00.021 { 00:24:00.021 "method": "keyring_file_add_key", 00:24:00.021 "params": { 00:24:00.021 "name": "key0", 00:24:00.021 "path": "/tmp/tmp.EEa1j9bFh8" 00:24:00.021 } 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "method": "keyring_file_add_key", 00:24:00.021 "params": { 00:24:00.021 "name": "key1", 00:24:00.021 "path": "/tmp/tmp.mW5SiNsI0K" 00:24:00.021 } 00:24:00.021 } 00:24:00.021 ] 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "subsystem": "iobuf", 00:24:00.021 "config": [ 00:24:00.021 { 00:24:00.021 "method": "iobuf_set_options", 00:24:00.021 "params": { 00:24:00.021 "large_bufsize": 135168, 00:24:00.021 "large_pool_count": 1024, 00:24:00.021 "small_bufsize": 8192, 00:24:00.021 "small_pool_count": 8192 00:24:00.021 } 00:24:00.021 } 00:24:00.021 ] 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "subsystem": "sock", 00:24:00.021 "config": [ 00:24:00.021 { 00:24:00.021 "method": "sock_set_default_impl", 00:24:00.021 "params": { 00:24:00.021 "impl_name": "posix" 00:24:00.021 } 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "method": "sock_impl_set_options", 00:24:00.021 "params": { 00:24:00.021 "enable_ktls": false, 00:24:00.021 "enable_placement_id": 0, 00:24:00.021 "enable_quickack": false, 00:24:00.021 "enable_recv_pipe": true, 00:24:00.021 "enable_zerocopy_send_client": false, 00:24:00.021 "enable_zerocopy_send_server": true, 00:24:00.021 "impl_name": "ssl", 00:24:00.021 "recv_buf_size": 4096, 00:24:00.021 "send_buf_size": 4096, 00:24:00.021 "tls_version": 0, 00:24:00.021 "zerocopy_threshold": 0 00:24:00.021 } 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "method": "sock_impl_set_options", 00:24:00.021 "params": { 00:24:00.021 "enable_ktls": false, 00:24:00.021 "enable_placement_id": 0, 00:24:00.021 "enable_quickack": false, 00:24:00.021 "enable_recv_pipe": true, 00:24:00.021 "enable_zerocopy_send_client": false, 00:24:00.021 "enable_zerocopy_send_server": true, 00:24:00.021 "impl_name": "posix", 00:24:00.021 "recv_buf_size": 2097152, 00:24:00.021 "send_buf_size": 2097152, 00:24:00.021 "tls_version": 0, 00:24:00.021 "zerocopy_threshold": 0 00:24:00.021 } 00:24:00.021 } 00:24:00.021 ] 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "subsystem": "vmd", 00:24:00.021 "config": [] 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "subsystem": "accel", 00:24:00.021 "config": [ 00:24:00.021 { 00:24:00.021 "method": 
"accel_set_options", 00:24:00.021 "params": { 00:24:00.021 "buf_count": 2048, 00:24:00.021 "large_cache_size": 16, 00:24:00.021 "sequence_count": 2048, 00:24:00.021 "small_cache_size": 128, 00:24:00.021 "task_count": 2048 00:24:00.021 } 00:24:00.021 } 00:24:00.021 ] 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "subsystem": "bdev", 00:24:00.021 "config": [ 00:24:00.021 { 00:24:00.021 "method": "bdev_set_options", 00:24:00.021 "params": { 00:24:00.021 "bdev_auto_examine": true, 00:24:00.021 "bdev_io_cache_size": 256, 00:24:00.021 "bdev_io_pool_size": 65535, 00:24:00.021 "iobuf_large_cache_size": 16, 00:24:00.021 "iobuf_small_cache_size": 128 00:24:00.021 } 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "method": "bdev_raid_set_options", 00:24:00.021 "params": { 00:24:00.021 "process_window_size_kb": 1024 00:24:00.021 } 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "method": "bdev_iscsi_set_options", 00:24:00.021 "params": { 00:24:00.021 "timeout_sec": 30 00:24:00.021 } 00:24:00.021 }, 00:24:00.021 { 00:24:00.021 "method": "bdev_nvme_set_options", 00:24:00.021 "params": { 00:24:00.021 "action_on_timeout": "none", 00:24:00.021 "allow_accel_sequence": false, 00:24:00.021 "arbitration_burst": 0, 00:24:00.021 "bdev_retry_count": 3, 00:24:00.021 "ctrlr_loss_timeout_sec": 0, 00:24:00.021 "delay_cmd_submit": true, 00:24:00.021 "dhchap_dhgroups": [ 00:24:00.021 "null", 00:24:00.021 "ffdhe2048", 00:24:00.021 "ffdhe3072", 00:24:00.021 "ffdhe4096", 00:24:00.021 "ffdhe6144", 00:24:00.021 "ffdhe8192" 00:24:00.022 ], 00:24:00.022 "dhchap_digests": [ 00:24:00.022 "sha256", 00:24:00.022 "sha384", 00:24:00.022 "sha512" 00:24:00.022 ], 00:24:00.022 "disable_auto_failback": false, 00:24:00.022 "fast_io_fail_timeout_sec": 0, 00:24:00.022 "generate_uuids": false, 00:24:00.022 "high_priority_weight": 0, 00:24:00.022 "io_path_stat": false, 00:24:00.022 "io_queue_requests": 512, 00:24:00.022 "keep_alive_timeout_ms": 10000, 00:24:00.022 "low_priority_weight": 0, 00:24:00.022 "medium_priority_weight": 0, 00:24:00.022 "nvme_adminq_poll_period_us": 10000, 00:24:00.022 "nvme_error_stat": false, 00:24:00.022 "nvme_ioq_poll_period_us": 0, 00:24:00.022 "rdma_cm_event_timeout_ms": 0, 00:24:00.022 "rdma_max_cq_size": 0, 00:24:00.022 "rdma_srq_size": 0, 00:24:00.022 "reconnect_delay_sec": 0, 00:24:00.022 "timeout_admin_us": 0, 00:24:00.022 "timeout_us": 0, 00:24:00.022 "transport_ack_timeout": 0, 00:24:00.022 "transport_retry_count": 4, 00:24:00.022 "transport_tos": 0 00:24:00.022 } 00:24:00.022 }, 00:24:00.022 { 00:24:00.022 "method": "bdev_nvme_attach_controller", 00:24:00.022 "params": { 00:24:00.022 "adrfam": "IPv4", 00:24:00.022 "ctrlr_loss_timeout_sec": 0, 00:24:00.022 "ddgst": false, 00:24:00.022 "fast_io_fail_timeout_sec": 0, 00:24:00.022 "hdgst": false, 00:24:00.022 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:00.022 "name": "nvme0", 00:24:00.022 "prchk_guard": false, 00:24:00.022 "prchk_reftag": false, 00:24:00.022 "psk": "key0", 00:24:00.022 "reconnect_delay_sec": 0, 00:24:00.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:00.022 "traddr": "127.0.0.1", 00:24:00.022 "trsvcid": "4420", 00:24:00.022 "trtype": "TCP" 00:24:00.022 } 00:24:00.022 }, 00:24:00.022 { 00:24:00.022 "method": "bdev_nvme_set_hotplug", 00:24:00.022 "params": { 00:24:00.022 "enable": false, 00:24:00.022 "period_us": 100000 00:24:00.022 } 00:24:00.022 }, 00:24:00.022 { 00:24:00.022 "method": "bdev_wait_for_examine" 00:24:00.022 } 00:24:00.022 ] 00:24:00.022 }, 00:24:00.022 { 00:24:00.022 "subsystem": "nbd", 00:24:00.022 "config": [] 00:24:00.022 } 
00:24:00.022 ] 00:24:00.022 }' 00:24:00.022 11:42:37 keyring_file -- keyring/file.sh@114 -- # killprocess 100043 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100043 ']' 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100043 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100043 00:24:00.022 killing process with pid 100043 00:24:00.022 Received shutdown signal, test time was about 1.000000 seconds 00:24:00.022 00:24:00.022 Latency(us) 00:24:00.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.022 =================================================================================================================== 00:24:00.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100043' 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@967 -- # kill 100043 00:24:00.022 11:42:37 keyring_file -- common/autotest_common.sh@972 -- # wait 100043 00:24:00.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:00.280 11:42:37 keyring_file -- keyring/file.sh@117 -- # bperfpid=100544 00:24:00.280 11:42:37 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100544 /var/tmp/bperf.sock 00:24:00.280 11:42:37 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100544 ']' 00:24:00.280 11:42:37 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.280 11:42:37 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:00.280 11:42:37 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.280 11:42:37 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:00.280 "subsystems": [ 00:24:00.280 { 00:24:00.280 "subsystem": "keyring", 00:24:00.280 "config": [ 00:24:00.280 { 00:24:00.280 "method": "keyring_file_add_key", 00:24:00.280 "params": { 00:24:00.280 "name": "key0", 00:24:00.280 "path": "/tmp/tmp.EEa1j9bFh8" 00:24:00.280 } 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "method": "keyring_file_add_key", 00:24:00.280 "params": { 00:24:00.280 "name": "key1", 00:24:00.280 "path": "/tmp/tmp.mW5SiNsI0K" 00:24:00.280 } 00:24:00.280 } 00:24:00.280 ] 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "subsystem": "iobuf", 00:24:00.280 "config": [ 00:24:00.280 { 00:24:00.280 "method": "iobuf_set_options", 00:24:00.280 "params": { 00:24:00.280 "large_bufsize": 135168, 00:24:00.280 "large_pool_count": 1024, 00:24:00.280 "small_bufsize": 8192, 00:24:00.280 "small_pool_count": 8192 00:24:00.280 } 00:24:00.280 } 00:24:00.280 ] 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "subsystem": "sock", 00:24:00.280 "config": [ 00:24:00.280 { 00:24:00.280 "method": "sock_set_default_impl", 00:24:00.280 "params": { 00:24:00.280 "impl_name": "posix" 00:24:00.280 } 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "method": "sock_impl_set_options", 00:24:00.280 "params": { 00:24:00.280 "enable_ktls": 
false, 00:24:00.280 "enable_placement_id": 0, 00:24:00.280 "enable_quickack": false, 00:24:00.280 "enable_recv_pipe": true, 00:24:00.280 "enable_zerocopy_send_client": false, 00:24:00.280 "enable_zerocopy_send_server": true, 00:24:00.280 "impl_name": "ssl", 00:24:00.280 "recv_buf_size": 4096, 00:24:00.280 "send_buf_size": 4096, 00:24:00.280 "tls_version": 0, 00:24:00.280 "zerocopy_threshold": 0 00:24:00.280 } 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "method": "sock_impl_set_options", 00:24:00.280 "params": { 00:24:00.280 "enable_ktls": false, 00:24:00.280 "enable_placement_id": 0, 00:24:00.280 "enable_quickack": false, 00:24:00.280 "enable_recv_pipe": true, 00:24:00.280 "enable_zerocopy_send_client": false, 00:24:00.280 "enable_zerocopy_send_server": true, 00:24:00.280 "impl_name": "posix", 00:24:00.280 "recv_buf_size": 2097152, 00:24:00.280 "send_buf_size": 2097152, 00:24:00.280 "tls_version": 0, 00:24:00.280 "zerocopy_threshold": 0 00:24:00.280 } 00:24:00.280 } 00:24:00.280 ] 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "subsystem": "vmd", 00:24:00.280 "config": [] 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "subsystem": "accel", 00:24:00.280 "config": [ 00:24:00.280 { 00:24:00.280 "method": "accel_set_options", 00:24:00.280 "params": { 00:24:00.280 "buf_count": 2048, 00:24:00.280 "large_cache_size": 16, 00:24:00.280 "sequence_count": 2048, 00:24:00.280 "small_cache_size": 128, 00:24:00.280 "task_count": 2048 00:24:00.280 } 00:24:00.280 } 00:24:00.280 ] 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "subsystem": "bdev", 00:24:00.280 "config": [ 00:24:00.280 { 00:24:00.280 "method": "bdev_set_options", 00:24:00.280 "params": { 00:24:00.280 "bdev_auto_examine": true, 00:24:00.280 "bdev_io_cache_size": 256, 00:24:00.280 "bdev_io_pool_size": 65535, 00:24:00.280 "iobuf_large_cache_size": 16, 00:24:00.280 "iobuf_small_cache_size": 128 00:24:00.280 } 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "method": "bdev_raid_set_options", 00:24:00.280 "params": { 00:24:00.280 "process_window_size_kb": 1024 00:24:00.280 } 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "method": "bdev_iscsi_set_options", 00:24:00.280 "params": { 00:24:00.280 "timeout_sec": 30 00:24:00.280 } 00:24:00.280 }, 00:24:00.280 { 00:24:00.280 "method": "bdev_nvme_set_options", 00:24:00.280 "params": { 00:24:00.280 "action_on_timeout": "none", 00:24:00.280 "allow_accel_sequence": false, 00:24:00.280 "arbitration_burst": 0, 00:24:00.280 "bdev_retry_count": 3, 00:24:00.280 "ctrlr_loss_timeout_sec": 0, 00:24:00.280 "delay_cmd_submit": true, 00:24:00.280 "dhchap_dhgroups": [ 00:24:00.280 "null", 00:24:00.280 "ffdhe2048", 00:24:00.280 "ffdhe3072", 00:24:00.280 "ffdhe4096", 00:24:00.280 "ffdhe6144", 00:24:00.280 "ffdhe8192" 00:24:00.280 ], 00:24:00.280 "dhchap_digests": [ 00:24:00.280 "sha256", 00:24:00.280 "sha384", 00:24:00.281 "sha512" 00:24:00.281 ], 00:24:00.281 "disable_auto_failback": false, 00:24:00.281 "fast_io_fail_timeout_sec": 0, 00:24:00.281 "generate_uuids": false, 00:24:00.281 "high_priority_weight": 0, 00:24:00.281 "io_path_stat": false, 00:24:00.281 "io_queue_requests": 512, 00:24:00.281 "keep_alive_timeout_ms": 10000, 00:24:00.281 "low_priority_weight": 0, 00:24:00.281 "medium_priority_weight": 0, 00:24:00.281 "nvme_adminq_poll_period_us": 10000, 00:24:00.281 "nvme_error_stat": false, 00:24:00.281 "nvme_ioq_poll_period_us": 0, 00:24:00.281 "rdma_cm_event_timeout_ms": 0, 00:24:00.281 "rdma_max_cq_size": 0, 00:24:00.281 "rdma_srq_size": 0, 00:24:00.281 "reconnect_delay_sec": 0, 00:24:00.281 "timeout_admin_us": 0, 00:24:00.281 
"timeout_us": 0, 00:24:00.281 "transport_ack_timeout": 0, 00:24:00.281 "transport_retry_count": 4, 00:24:00.281 "transport_tos": 0 00:24:00.281 } 00:24:00.281 }, 00:24:00.281 { 00:24:00.281 "method": "bdev_nvme_attach_controller", 00:24:00.281 "params": { 00:24:00.281 "adrfam": "IPv4", 00:24:00.281 "ctrlr_loss_timeout_sec": 0, 00:24:00.281 "ddgst": false, 00:24:00.281 "fast_io_fail_timeout_sec": 0, 00:24:00.281 "hdgst": false, 00:24:00.281 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:00.281 "name": "nvme0", 00:24:00.281 "prchk_guard": false, 00:24:00.281 "prchk_reftag": false, 00:24:00.281 "psk": "key0", 00:24:00.281 "reconnect_delay_sec": 0, 00:24:00.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:00.281 "traddr": "127.0.0.1", 00:24:00.281 "trsvcid": "4420", 00:24:00.281 "trtype": "TCP" 00:24:00.281 } 00:24:00.281 }, 00:24:00.281 { 00:24:00.281 "method": "bdev_nvme_set_hotplug", 00:24:00.281 "params": { 00:24:00.281 "enable": false, 00:24:00.281 "period_us": 100000 00:24:00.281 } 00:24:00.281 }, 00:24:00.281 { 00:24:00.281 "method": "bdev_wait_for_examine" 00:24:00.281 } 00:24:00.281 ] 00:24:00.281 }, 00:24:00.281 { 00:24:00.281 "subsystem": "nbd", 00:24:00.281 "config": [] 00:24:00.281 } 00:24:00.281 ] 00:24:00.281 }' 00:24:00.281 11:42:37 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:00.281 11:42:37 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.281 11:42:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:00.281 [2024-07-15 11:42:37.686388] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:00.281 [2024-07-15 11:42:37.686514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100544 ] 00:24:00.538 [2024-07-15 11:42:37.825460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.538 [2024-07-15 11:42:37.892217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.795 [2024-07-15 11:42:38.035337] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.360 11:42:38 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.360 11:42:38 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:01.360 11:42:38 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:01.360 11:42:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.360 11:42:38 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:01.925 11:42:39 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:01.925 11:42:39 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:01.925 11:42:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:01.925 11:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:01.925 11:42:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:01.925 11:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.925 11:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:02.490 11:42:39 keyring_file -- keyring/file.sh@121 -- # (( 
2 == 2 )) 00:24:02.490 11:42:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:02.490 11:42:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:02.490 11:42:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:02.490 11:42:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:02.490 11:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:02.490 11:42:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:02.748 11:42:39 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:02.748 11:42:39 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:02.748 11:42:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:02.748 11:42:39 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:03.006 11:42:40 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:03.006 11:42:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:03.006 11:42:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.EEa1j9bFh8 /tmp/tmp.mW5SiNsI0K 00:24:03.006 11:42:40 keyring_file -- keyring/file.sh@20 -- # killprocess 100544 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100544 ']' 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100544 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100544 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:03.006 killing process with pid 100544 00:24:03.006 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.006 00:24:03.006 Latency(us) 00:24:03.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.006 =================================================================================================================== 00:24:03.006 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100544' 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@967 -- # kill 100544 00:24:03.006 11:42:40 keyring_file -- common/autotest_common.sh@972 -- # wait 100544 00:24:03.263 11:42:40 keyring_file -- keyring/file.sh@21 -- # killprocess 100023 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100023 ']' 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100023 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100023 00:24:03.263 killing process with pid 100023 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.263 11:42:40 keyring_file -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 100023' 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@967 -- # kill 100023 00:24:03.263 [2024-07-15 11:42:40.605427] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:03.263 11:42:40 keyring_file -- common/autotest_common.sh@972 -- # wait 100023 00:24:03.521 ************************************ 00:24:03.521 END TEST keyring_file 00:24:03.521 ************************************ 00:24:03.521 00:24:03.521 real 0m19.095s 00:24:03.521 user 0m50.423s 00:24:03.521 sys 0m3.353s 00:24:03.521 11:42:40 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:03.521 11:42:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:03.521 11:42:40 -- common/autotest_common.sh@1142 -- # return 0 00:24:03.521 11:42:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:03.521 11:42:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:03.521 11:42:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:03.521 11:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.521 11:42:40 -- common/autotest_common.sh@10 -- # set +x 00:24:03.521 ************************************ 00:24:03.521 START TEST keyring_linux 00:24:03.521 ************************************ 00:24:03.521 11:42:40 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:03.778 * Looking for test storage... 00:24:03.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:03.778 11:42:41 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:03.778 11:42:41 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:891080d4-f96c-4735-b9e2-e3ce9892e421 00:24:03.778 11:42:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=891080d4-f96c-4735-b9e2-e3ce9892e421 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.779 11:42:41 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.779 11:42:41 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.779 11:42:41 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.779 11:42:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 11:42:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 11:42:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 11:42:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:03.779 11:42:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:03.779 11:42:41 keyring_linux -- 
keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:03.779 /tmp/:spdk-test:key0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:03.779 11:42:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:03.779 /tmp/:spdk-test:key1 00:24:03.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:03.779 11:42:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100694 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100694 00:24:03.779 11:42:41 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:03.779 11:42:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100694 ']' 00:24:03.779 11:42:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.779 11:42:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.779 11:42:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.779 11:42:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.779 11:42:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:03.779 [2024-07-15 11:42:41.224250] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:03.779 [2024-07-15 11:42:41.224382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100694 ] 00:24:04.037 [2024-07-15 11:42:41.366082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.037 [2024-07-15 11:42:41.426438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.294 11:42:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.294 11:42:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:04.294 11:42:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:04.294 11:42:41 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:04.295 [2024-07-15 11:42:41.609361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.295 null0 00:24:04.295 [2024-07-15 11:42:41.641316] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.295 [2024-07-15 11:42:41.641614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.295 11:42:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:04.295 650369256 00:24:04.295 11:42:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:04.295 483046599 00:24:04.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
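Unlike keyring_file, this test keeps the PSKs in the kernel session keyring rather than in files: the keyctl add calls above stored the interchange-format keys under the names :spdk-test:key0 and :spdk-test:key1 and returned the serials 650369256 and 483046599, and bdevperf is later pointed at them by name via --psk :spdk-test:key0. A short sketch of the keyctl round-trip that the remainder of the trace performs (serial numbers will differ on another machine):

psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # add to the session keyring; prints the serial, e.g. 650369256

keyctl search @s user :spdk-test:key0             # resolve the name back to the same serial
keyctl print "$sn"                                # dump the stored payload (the interchange PSK above)

# SPDK references the entry by name:
#   bdev_nvme_attach_controller ... --psk :spdk-test:key0
# Cleanup drops the link from the session keyring:
keyctl unlink "$sn"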
00:24:04.295 11:42:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100718 00:24:04.295 11:42:41 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:04.295 11:42:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100718 /var/tmp/bperf.sock 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100718 ']' 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.295 11:42:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:04.295 [2024-07-15 11:42:41.736538] Starting SPDK v24.09-pre git sha1 e7cce062d / DPDK 24.03.0 initialization... 00:24:04.295 [2024-07-15 11:42:41.736705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100718 ] 00:24:04.552 [2024-07-15 11:42:41.878358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.552 [2024-07-15 11:42:41.941613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.484 11:42:42 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.484 11:42:42 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:05.484 11:42:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:05.484 11:42:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:05.742 11:42:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:05.742 11:42:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:06.307 11:42:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:06.307 11:42:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:06.572 [2024-07-15 11:42:43.823914] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.572 nvme0n1 00:24:06.572 11:42:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:06.572 11:42:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:06.572 11:42:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:06.572 11:42:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:06.572 11:42:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:06.572 11:42:43 keyring_linux -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.848 11:42:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:06.848 11:42:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:06.848 11:42:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:06.848 11:42:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:06.848 11:42:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.848 11:42:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.848 11:42:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@25 -- # sn=650369256 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 650369256 == \6\5\0\3\6\9\2\5\6 ]] 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 650369256 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:07.413 11:42:44 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:07.413 Running I/O for 1 seconds... 00:24:08.784 00:24:08.784 Latency(us) 00:24:08.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.784 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:08.784 nvme0n1 : 1.02 9975.14 38.97 0.00 0.00 12718.76 4140.68 18826.71 00:24:08.784 =================================================================================================================== 00:24:08.784 Total : 9975.14 38.97 0.00 0.00 12718.76 4140.68 18826.71 00:24:08.784 0 00:24:08.784 11:42:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:08.784 11:42:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:08.784 11:42:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:08.784 11:42:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:08.784 11:42:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:08.784 11:42:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:08.784 11:42:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:08.784 11:42:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:09.349 11:42:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:09.349 11:42:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:09.349 11:42:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:09.349 11:42:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@648 -- 
# local es=0 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:09.349 11:42:46 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:09.349 11:42:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:09.349 [2024-07-15 11:42:46.817220] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:09.349 [2024-07-15 11:42:46.817426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aaea0 (107): Transport endpoint is not connected 00:24:09.349 [2024-07-15 11:42:46.818413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aaea0 (9): Bad file descriptor 00:24:09.349 [2024-07-15 11:42:46.819409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:09.349 [2024-07-15 11:42:46.819433] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:09.349 [2024-07-15 11:42:46.819444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:09.349 2024/07/15 11:42:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:09.349 request: 00:24:09.349 { 00:24:09.349 "method": "bdev_nvme_attach_controller", 00:24:09.349 "params": { 00:24:09.349 "name": "nvme0", 00:24:09.349 "trtype": "tcp", 00:24:09.349 "traddr": "127.0.0.1", 00:24:09.349 "adrfam": "ipv4", 00:24:09.349 "trsvcid": "4420", 00:24:09.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:09.349 "prchk_reftag": false, 00:24:09.349 "prchk_guard": false, 00:24:09.349 "hdgst": false, 00:24:09.349 "ddgst": false, 00:24:09.349 "psk": ":spdk-test:key1" 00:24:09.349 } 00:24:09.349 } 00:24:09.349 Got JSON-RPC error response 00:24:09.349 GoRPCClient: error on JSON-RPC call 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@33 -- # sn=650369256 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 650369256 00:24:09.607 1 links removed 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@33 -- # sn=483046599 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 483046599 00:24:09.607 1 links removed 00:24:09.607 11:42:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100718 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100718 ']' 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100718 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100718 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
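The keyring_linux flow traced above stores the NVMe/TCP TLS PSK as a plain "user" key in the kernel session keyring and checks that SPDK and keyctl agree on its serial number before unlinking it during cleanup. A minimal sketch of that key lifecycle, assembled from the keyctl calls and the PSK value visible in this trace (keyring/linux.sh is the script actually driving it, so details may differ):

  # register the PSK as a user key in the session keyring (@s);
  # keyctl add prints the serial number of the new key
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  keyctl add user ":spdk-test:key0" "$psk" @s

  # look the key up again and dump its payload for comparison
  sn=$(keyctl search @s user ":spdk-test:key0")
  keyctl print "$sn"          # expected to echo the PSK registered above

  # once the run is finished, drop the key from the session keyring
  keyctl unlink "$sn"         # "1 links removed", as in the cleanup above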
00:24:09.607 killing process with pid 100718 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100718' 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@967 -- # kill 100718 00:24:09.607 Received shutdown signal, test time was about 1.000000 seconds 00:24:09.607 00:24:09.607 Latency(us) 00:24:09.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.607 =================================================================================================================== 00:24:09.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.607 11:42:46 keyring_linux -- common/autotest_common.sh@972 -- # wait 100718 00:24:09.607 11:42:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100694 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100694 ']' 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100694 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100694 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:09.607 killing process with pid 100694 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100694' 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@967 -- # kill 100694 00:24:09.607 11:42:47 keyring_linux -- common/autotest_common.sh@972 -- # wait 100694 00:24:09.864 00:24:09.864 real 0m6.388s 00:24:09.864 user 0m13.813s 00:24:09.864 sys 0m1.481s 00:24:09.864 11:42:47 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:09.864 11:42:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:09.864 ************************************ 00:24:09.864 END TEST keyring_linux 00:24:09.864 ************************************ 00:24:10.121 11:42:47 -- common/autotest_common.sh@1142 -- # return 0 00:24:10.121 11:42:47 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:24:10.121 11:42:47 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:24:10.121 11:42:47 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:24:10.121 11:42:47 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:24:10.121 11:42:47 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:24:10.121 11:42:47 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:24:10.121 11:42:47 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:24:10.121 11:42:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.121 11:42:47 -- common/autotest_common.sh@10 -- # set +x 
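Everything the bdevperf job above does is driven over its private JSON-RPC socket at /var/tmp/bperf.sock: list the registered keys, attach the NVMe/TCP controller with a PSK taken from the keyring, fire the workload with perform_tests, then detach. A condensed sketch of that sequence follows; the attach call mirrors the bdev_nvme_attach_controller invocation in the trace (which uses :spdk-test:key1 for the expected-failure case), while the passing-path key name (:spdk-test:key0) and the bdevperf launch flags are inferred from the reported workload (randread, qd 128, 4 KiB, 1 s) rather than copied from this log:

  # start bdevperf idle (-z) and have it listen on the private RPC socket
  ./build/examples/bdevperf -z -r /var/tmp/bperf.sock -q 128 -o 4096 -w randread -t 1 &

  # list keys, then attach the target using the PSK registered in the session keyring
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

  # run the timed workload, then tear the controller down again
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0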
00:24:10.121 11:42:47 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:24:10.121 11:42:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:10.121 11:42:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:10.121 11:42:47 -- common/autotest_common.sh@10 -- # set +x 00:24:11.492 INFO: APP EXITING 00:24:11.492 INFO: killing all VMs 00:24:11.492 INFO: killing vhost app 00:24:11.492 INFO: EXIT DONE 00:24:12.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:12.057 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:12.057 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:12.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:12.620 Cleaning 00:24:12.620 Removing: /var/run/dpdk/spdk0/config 00:24:12.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:12.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:12.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:12.620 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:12.620 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:12.620 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:12.620 Removing: /var/run/dpdk/spdk1/config 00:24:12.620 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:12.620 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:12.620 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:12.620 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:12.620 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:12.620 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:12.620 Removing: /var/run/dpdk/spdk2/config 00:24:12.620 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:12.620 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:12.620 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:12.620 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:12.620 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:12.620 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:12.877 Removing: /var/run/dpdk/spdk3/config 00:24:12.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:12.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:12.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:12.877 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:12.877 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:12.877 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:12.877 Removing: /var/run/dpdk/spdk4/config 00:24:12.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:12.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:12.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:12.877 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:12.877 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:12.877 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:12.877 Removing: /dev/shm/nvmf_trace.0 00:24:12.877 Removing: /dev/shm/spdk_tgt_trace.pid60700 00:24:12.877 Removing: /var/run/dpdk/spdk0 00:24:12.877 Removing: /var/run/dpdk/spdk1 00:24:12.877 Removing: /var/run/dpdk/spdk2 00:24:12.877 Removing: /var/run/dpdk/spdk3 00:24:12.877 Removing: /var/run/dpdk/spdk4 00:24:12.877 Removing: /var/run/dpdk/spdk_pid100023 00:24:12.877 Removing: /var/run/dpdk/spdk_pid100043 00:24:12.877 Removing: /var/run/dpdk/spdk_pid100544 00:24:12.877 Removing: /var/run/dpdk/spdk_pid100694 
00:24:12.877 Removing: /var/run/dpdk/spdk_pid100718 00:24:12.877 Removing: /var/run/dpdk/spdk_pid60560 00:24:12.877 Removing: /var/run/dpdk/spdk_pid60700 00:24:12.877 Removing: /var/run/dpdk/spdk_pid60961 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61048 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61074 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61178 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61208 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61326 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61606 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61782 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61853 00:24:12.877 Removing: /var/run/dpdk/spdk_pid61945 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62035 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62073 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62103 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62165 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62266 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62877 00:24:12.877 Removing: /var/run/dpdk/spdk_pid62941 00:24:12.877 Removing: /var/run/dpdk/spdk_pid63010 00:24:12.877 Removing: /var/run/dpdk/spdk_pid63038 00:24:12.877 Removing: /var/run/dpdk/spdk_pid63117 00:24:12.877 Removing: /var/run/dpdk/spdk_pid63126 00:24:12.877 Removing: /var/run/dpdk/spdk_pid63205 00:24:12.877 Removing: /var/run/dpdk/spdk_pid63233 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63279 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63309 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63355 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63385 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63537 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63573 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63646 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63711 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63736 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63794 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63829 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63863 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63900 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63930 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63964 00:24:12.878 Removing: /var/run/dpdk/spdk_pid63999 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64028 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64068 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64097 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64136 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64168 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64197 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64237 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64266 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64301 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64335 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64367 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64410 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64440 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64476 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64540 00:24:12.878 Removing: /var/run/dpdk/spdk_pid64651 00:24:12.878 Removing: /var/run/dpdk/spdk_pid65070 00:24:12.878 Removing: /var/run/dpdk/spdk_pid68351 00:24:12.878 Removing: /var/run/dpdk/spdk_pid68695 00:24:12.878 Removing: /var/run/dpdk/spdk_pid71123 00:24:12.878 Removing: /var/run/dpdk/spdk_pid71493 00:24:12.878 Removing: /var/run/dpdk/spdk_pid71736 00:24:12.878 Removing: /var/run/dpdk/spdk_pid71787 00:24:12.878 Removing: /var/run/dpdk/spdk_pid72409 00:24:12.878 Removing: /var/run/dpdk/spdk_pid72837 00:24:12.878 Removing: /var/run/dpdk/spdk_pid72884 00:24:12.878 Removing: /var/run/dpdk/spdk_pid73250 00:24:12.878 Removing: 
/var/run/dpdk/spdk_pid73774 00:24:12.878 Removing: /var/run/dpdk/spdk_pid74220 00:24:12.878 Removing: /var/run/dpdk/spdk_pid75130 00:24:12.878 Removing: /var/run/dpdk/spdk_pid76074 00:24:12.878 Removing: /var/run/dpdk/spdk_pid76199 00:24:12.878 Removing: /var/run/dpdk/spdk_pid76263 00:24:12.878 Removing: /var/run/dpdk/spdk_pid77719 00:24:12.878 Removing: /var/run/dpdk/spdk_pid77924 00:24:12.878 Removing: /var/run/dpdk/spdk_pid83298 00:24:12.878 Removing: /var/run/dpdk/spdk_pid83745 00:24:12.878 Removing: /var/run/dpdk/spdk_pid83853 00:24:12.878 Removing: /var/run/dpdk/spdk_pid83986 00:24:12.878 Removing: /var/run/dpdk/spdk_pid84031 00:24:12.878 Removing: /var/run/dpdk/spdk_pid84078 00:24:12.878 Removing: /var/run/dpdk/spdk_pid84124 00:24:12.878 Removing: /var/run/dpdk/spdk_pid84270 00:24:12.878 Removing: /var/run/dpdk/spdk_pid84404 00:24:12.878 Removing: /var/run/dpdk/spdk_pid84643 00:24:13.136 Removing: /var/run/dpdk/spdk_pid84752 00:24:13.136 Removing: /var/run/dpdk/spdk_pid85008 00:24:13.136 Removing: /var/run/dpdk/spdk_pid85122 00:24:13.136 Removing: /var/run/dpdk/spdk_pid85251 00:24:13.136 Removing: /var/run/dpdk/spdk_pid85607 00:24:13.136 Removing: /var/run/dpdk/spdk_pid86017 00:24:13.136 Removing: /var/run/dpdk/spdk_pid86320 00:24:13.136 Removing: /var/run/dpdk/spdk_pid86813 00:24:13.136 Removing: /var/run/dpdk/spdk_pid86822 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87162 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87176 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87196 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87226 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87238 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87588 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87638 00:24:13.136 Removing: /var/run/dpdk/spdk_pid87979 00:24:13.136 Removing: /var/run/dpdk/spdk_pid88230 00:24:13.136 Removing: /var/run/dpdk/spdk_pid88713 00:24:13.136 Removing: /var/run/dpdk/spdk_pid89277 00:24:13.136 Removing: /var/run/dpdk/spdk_pid90639 00:24:13.136 Removing: /var/run/dpdk/spdk_pid91225 00:24:13.136 Removing: /var/run/dpdk/spdk_pid91227 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93168 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93241 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93331 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93426 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93579 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93656 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93745 00:24:13.136 Removing: /var/run/dpdk/spdk_pid93822 00:24:13.136 Removing: /var/run/dpdk/spdk_pid94167 00:24:13.136 Removing: /var/run/dpdk/spdk_pid94855 00:24:13.136 Removing: /var/run/dpdk/spdk_pid96200 00:24:13.136 Removing: /var/run/dpdk/spdk_pid96400 00:24:13.136 Removing: /var/run/dpdk/spdk_pid96667 00:24:13.136 Removing: /var/run/dpdk/spdk_pid96967 00:24:13.136 Removing: /var/run/dpdk/spdk_pid97522 00:24:13.136 Removing: /var/run/dpdk/spdk_pid97527 00:24:13.136 Removing: /var/run/dpdk/spdk_pid97873 00:24:13.136 Removing: /var/run/dpdk/spdk_pid98027 00:24:13.136 Removing: /var/run/dpdk/spdk_pid98183 00:24:13.136 Removing: /var/run/dpdk/spdk_pid98271 00:24:13.136 Removing: /var/run/dpdk/spdk_pid98455 00:24:13.136 Removing: /var/run/dpdk/spdk_pid98564 00:24:13.136 Removing: /var/run/dpdk/spdk_pid99209 00:24:13.136 Removing: /var/run/dpdk/spdk_pid99244 00:24:13.136 Removing: /var/run/dpdk/spdk_pid99279 00:24:13.136 Removing: /var/run/dpdk/spdk_pid99533 00:24:13.136 Removing: /var/run/dpdk/spdk_pid99567 00:24:13.136 Removing: /var/run/dpdk/spdk_pid99598 00:24:13.136 Clean 00:24:13.136 11:42:50 -- 
common/autotest_common.sh@1451 -- # return 0 00:24:13.136 11:42:50 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:24:13.136 11:42:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:13.136 11:42:50 -- common/autotest_common.sh@10 -- # set +x 00:24:13.136 11:42:50 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:24:13.136 11:42:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:13.136 11:42:50 -- common/autotest_common.sh@10 -- # set +x 00:24:13.136 11:42:50 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:13.136 11:42:50 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:13.136 11:42:50 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:13.136 11:42:50 -- spdk/autotest.sh@391 -- # hash lcov 00:24:13.136 11:42:50 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:13.136 11:42:50 -- spdk/autotest.sh@393 -- # hostname 00:24:13.136 11:42:50 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:13.394 geninfo: WARNING: invalid characters removed from testname! 00:24:45.456 11:43:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:46.388 11:43:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:49.712 11:43:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:52.239 11:43:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:55.520 11:43:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:58.044 11:43:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:00.617 11:43:38 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:00.875 11:43:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.875 11:43:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:00.875 11:43:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.875 11:43:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.875 11:43:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.875 11:43:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.875 11:43:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.875 11:43:38 -- paths/export.sh@5 -- $ export PATH 00:25:00.875 11:43:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.875 11:43:38 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:00.875 11:43:38 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:00.875 11:43:38 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721043818.XXXXXX 00:25:00.875 11:43:38 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721043818.XuUe5Y 00:25:00.875 11:43:38 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:00.875 11:43:38 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:00.875 11:43:38 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:00.875 11:43:38 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:00.875 11:43:38 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:00.875 11:43:38 -- common/autobuild_common.sh@460 -- $ 
get_config_params 00:25:00.875 11:43:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:00.875 11:43:38 -- common/autotest_common.sh@10 -- $ set +x 00:25:00.876 11:43:38 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:25:00.876 11:43:38 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:00.876 11:43:38 -- pm/common@17 -- $ local monitor 00:25:00.876 11:43:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.876 11:43:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:00.876 11:43:38 -- pm/common@25 -- $ sleep 1 00:25:00.876 11:43:38 -- pm/common@21 -- $ date +%s 00:25:00.876 11:43:38 -- pm/common@21 -- $ date +%s 00:25:00.876 11:43:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721043818 00:25:00.876 11:43:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721043818 00:25:00.876 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721043818_collect-vmstat.pm.log 00:25:00.876 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721043818_collect-cpu-load.pm.log 00:25:01.811 11:43:39 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:01.811 11:43:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:01.811 11:43:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:01.811 11:43:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:01.811 11:43:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:01.811 11:43:39 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:01.811 11:43:39 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:01.811 11:43:39 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:01.811 11:43:39 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:01.811 11:43:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:01.811 11:43:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:01.811 11:43:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:01.811 11:43:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:01.811 11:43:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:01.811 11:43:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:01.811 11:43:39 -- pm/common@44 -- $ pid=102429 00:25:01.811 11:43:39 -- pm/common@50 -- $ kill -TERM 102429 00:25:01.811 11:43:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:01.811 11:43:39 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:01.811 11:43:39 -- pm/common@44 -- $ pid=102430 00:25:01.811 11:43:39 -- pm/common@50 -- $ kill -TERM 102430 00:25:01.811 + [[ -n 5156 ]] 00:25:01.811 + sudo kill 5156 00:25:01.823 [Pipeline] } 00:25:01.843 [Pipeline] // timeout 00:25:01.849 [Pipeline] } 00:25:01.869 [Pipeline] // stage 00:25:01.875 [Pipeline] } 
00:25:01.888 [Pipeline] // catchError 00:25:01.898 [Pipeline] stage 00:25:01.900 [Pipeline] { (Stop VM) 00:25:01.914 [Pipeline] sh 00:25:02.193 + vagrant halt 00:25:06.380 ==> default: Halting domain... 00:25:11.652 [Pipeline] sh 00:25:11.927 + vagrant destroy -f 00:25:15.333 ==> default: Removing domain... 00:25:15.602 [Pipeline] sh 00:25:15.879 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:25:15.905 [Pipeline] } 00:25:15.930 [Pipeline] // stage 00:25:15.934 [Pipeline] } 00:25:15.944 [Pipeline] // dir 00:25:15.948 [Pipeline] } 00:25:15.960 [Pipeline] // wrap 00:25:15.966 [Pipeline] } 00:25:15.974 [Pipeline] // catchError 00:25:15.979 [Pipeline] stage 00:25:15.980 [Pipeline] { (Epilogue) 00:25:15.989 [Pipeline] sh 00:25:16.260 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:22.853 [Pipeline] catchError 00:25:22.855 [Pipeline] { 00:25:22.869 [Pipeline] sh 00:25:23.146 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:23.403 Artifacts sizes are good 00:25:23.412 [Pipeline] } 00:25:23.430 [Pipeline] // catchError 00:25:23.442 [Pipeline] archiveArtifacts 00:25:23.450 Archiving artifacts 00:25:23.643 [Pipeline] cleanWs 00:25:23.655 [WS-CLEANUP] Deleting project workspace... 00:25:23.655 [WS-CLEANUP] Deferred wipeout is used... 00:25:23.662 [WS-CLEANUP] done 00:25:23.664 [Pipeline] } 00:25:23.684 [Pipeline] // stage 00:25:23.691 [Pipeline] } 00:25:23.710 [Pipeline] // node 00:25:23.717 [Pipeline] End of Pipeline 00:25:23.760 Finished: SUCCESS
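For reference, the coverage steps recorded earlier in this log (the autotest.sh@393-399 lcov invocations) reduce to a standard capture/merge/filter pass over the spdk tree. A condensed sketch with the branch/function --rc switches and the test-name tag dropped for brevity, using the same workspace paths shown above:

  # capture coverage accumulated while the tests ran
  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o cov_test.info

  # merge with the baseline captured before the run
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info

  # strip third-party and uninteresting paths from the combined report
  lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' -o cov_total.info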